<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Jon Simpson - Engineering Modern Systems</title><description>The personal blog of Jon Simpson, a software engineer and technology enthusiast.</description><link>https://jonsimpson.ca/</link><language>en-us</language><item><title>Git Worktrees: The Simplest Way to Parallelize Your Coding Agents</title><link>https://jonsimpson.ca/git-worktrees-simplest-way-to-parallelize-coding-agents/</link><guid isPermaLink="true">https://jonsimpson.ca/git-worktrees-simplest-way-to-parallelize-coding-agents/</guid><description>Git Worktrees: The Simplest Way to Parallelize Your Coding Agents</description><pubDate>Sat, 21 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://git-scm.com/docs/git-worktree&quot;&gt;Git worktrees&lt;/a&gt; have existed for a while now. They give you the ability to have multiple working directories, all sharing the same git instance. This makes it easier to work on multiple pieces of work at the same time without the code from one interfering with the others. For example, I could work on a bugfix on one worktree, and focus on a feature on my main worktree.&lt;/p&gt;
&lt;p&gt;Worktrees make working with multiple coding agents on the same codebase at the same time even easier. I can focus on a larger UI-based task that uses my dev server while setting a handful of agents off to fix some bugs or little improvements. Yesterday I focused on a tricky feature requiring a bunch of thought about the domain model + UI changes while getting four agents to fix some bugs and add some new UI functionality in some of the app’s internal pages. I’ll explain the workflow in a second, but those agents fixing the bugs would commit when they’re done, and I’d quickly check their output and tell them to “open a PR”. One quick glance at the code and it’s merged. This is simplified a bit, but you really can achieve some great throughput in shipping with multitasking these agents.&lt;/p&gt;
&lt;p&gt;The best part about worktrees is that it pairs so well with using multiple agents at the same time. The agents won’t accidentally bump into each other since each agent has its own directory to work in. Coding agents are smart enough these days to &lt;code&gt;cd&lt;/code&gt; into the worktree and run all of its commands in there too. Doing the regular git operations like committing and opening PRs works as expected within a worktree.&lt;/p&gt;
&lt;h2 id=&quot;you-dont-need-to-know-much-about-worktrees-to-use-them&quot;&gt;You don’t need to know much about worktrees to use them&lt;/h2&gt;
&lt;p&gt;The great thing is that you can use worktrees with zero knowledge of what’s happening under the hood. Coding agents are smart enough to call the right git commands to set up a worktree and branch. With a little guidance, agents can consistently create worktrees in the right place using the right commands.&lt;/p&gt;
&lt;p&gt;Here’s what I’ve placed in my global &lt;a href=&quot;https://docs.anthropic.com/en/docs/claude-code/memory&quot;&gt;CLAUDE.md&lt;/a&gt; file (&lt;code&gt;~/.claude/CLAUDE.md&lt;/code&gt;) to let the agent know what to do every time it’s asked to use a worktree:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;md&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#79B8FF;font-weight:bold&quot;&gt;## Git Worktrees&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;When setting up git worktrees, always use &lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;`~/.worktrees/`&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt; as the base&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;directory. Name worktree directories using the pattern&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;`{repo-name}-{branch-name}`&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;. For example:&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;`git worktree add ~/.worktrees/my-project-my-feature -b my-feature main`&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then whenever I’m using &lt;a href=&quot;https://docs.anthropic.com/en/docs/claude-code/overview&quot;&gt;Claude Code&lt;/a&gt; to, say, fix a bug, I can just prompt it &lt;code&gt;in a worktree fix this UI bug in ...&lt;/code&gt;. It’s that easy. The Claude Code instance can remain in the same main working directory of the project and it’s smart enough to know to cd into that worktree directory for every command or tool it uses.&lt;/p&gt;
&lt;h2 id=&quot;dont-expect-it-to-be-a-silver-bullet&quot;&gt;Don’t expect it to be a silver bullet&lt;/h2&gt;
&lt;p&gt;There are some obvious limitations that worktrees don’t, and shouldn’t, solve. Depending on the project, if there are external dependencies (e.g. port numbers, databases, external APIs), things like running the dev server, tests, or code that affects anything outside the working directory could cause one agent to trip up another. Keep this in mind when using worktrees.&lt;/p&gt;
&lt;p&gt;Being able to isolate instances of your project from each other is the holy grail, as that allows the agent to fully do anything it deems useful to its task at hand. Run a dev server and have the agent use a browser? Sure. Let it go wild refactoring your database tables? Go for it. Git worktrees just handle the filesystem separation. Your own solution is needed for isolation — whether that’s assigning random ports, setting up and tearing down dependent services, or database-level separation. Don’t let this stop you from utilizing agents though! They can still be quite effective at reading and writing the code, running tests, and code linters. The extra flexibility you get from pushing agents to do bigger changes and providing them with ways to check their own work makes them even more powerful at building high-quality code.&lt;/p&gt;</content:encoded></item><item><title>From Phone to E-Ink: How I Built My Perfect Reading Setup with Wallabag</title><link>https://jonsimpson.ca/from-phone-to-e-ink-how-i-built-my-perfect-reading-setup-with-wallabag/</link><guid isPermaLink="true">https://jonsimpson.ca/from-phone-to-e-ink-how-i-built-my-perfect-reading-setup-with-wallabag/</guid><description>From Phone to E-Ink: How I Built My Perfect Reading Setup with Wallabag</description><pubDate>Tue, 03 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I love keeping up with tech news, especially nowadays with AI and agentic everything moving so fast. I read a lot to fill this desire to learn and stay up to date. To feed this insatiable need I follow a few blogs and daily newsletters. Way back in the day (circa 2010) I heavily used &lt;a href=&quot;https://en.wikipedia.org/wiki/Google_Reader&quot;&gt;Google Reader&lt;/a&gt; to keep track of my RSS feeds. Since that Google service went away, newsletters and tech news podcasts have helped fill the gap. I spend my mornings and evenings skimming through and either reading new content in the moment or saving it to be read later. All of this happens via my phone, so whenever I’m reading stuff in bed before passing out, the phone screen’s light isn’t great for falling asleep to. I much prefer reading from my Kindle since its e-ink screen is easier on the eyes, and it prevents me from going down all-consuming rabbit holes.&lt;/p&gt;
&lt;p&gt;One day, I came across an article mentioning that &lt;a href=&quot;https://kindlemodding.org/&quot;&gt;Kindles can be hacked&lt;/a&gt;. This was an eye-opener for me because I love hacking my devices and getting more functionality out of them, especially with all the different apps and services that others in the open-source community have added and contributed. One recent weekend I ended up hacking my Kindle, and it was awesome. After hacking you can use the regular Kindle experience, or click into a “KO Reader”-titled book which brings you into the wonderful open-source and feature-rich reading app called &lt;a href=&quot;https://koreader.rocks/&quot;&gt;KOReader&lt;/a&gt;. Some stand-out features are the mutliple file format support, an RSS feed reader, integration with &lt;a href=&quot;https://calibre-ebook.com/&quot;&gt;Calibre&lt;/a&gt;, and something called &lt;a href=&quot;https://wallabag.org/&quot;&gt;Wallabag&lt;/a&gt; for synchronizing content (kinda like Read It Later / &lt;a href=&quot;https://www.instapaper.com/&quot;&gt;Instapaper&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;I immediately started using the RSS feed reader feature. Enter a few RSS feed urls, and you can have your kindle fetch the latest news from those sites. This was great for reading some of the most common blogs I frequent. Over time, I encountered some pain points, such as bad HTML formatting of the posts, images not being correctly fetched, and some websites like Medium just didn’t work. Additionally, maintaining the list of RSS feeds was rough to do on the Kindle’s keyboard. I also wanted to be able to read any blog post, even if it wasn’t from a site that I had subscribed to. Being able to send those articles that I find from my newsletters or random sites to my kindle for later reading would be a dream for me.&lt;/p&gt;
&lt;p&gt;I was a big user of &lt;a href=&quot;https://en.wikipedia.org/wiki/Pocket_(service)&quot;&gt;Pocket&lt;/a&gt; too until that service closed down recently. Now I use Instapaper, but KO Reader didn’t have any integrations with that, and there didn’t seem to be any easy ways to sync saved articles to my Kindle. Beside the RSS feed reader was an app called called Wallabag that I’ve never heard of before. From some quick searching it sounds like it could help, but that deepdive would have to wait for a weekend when I’m free.&lt;/p&gt;
&lt;h2 id=&quot;wallabag&quot;&gt;Wallabag&lt;/h2&gt;
&lt;p&gt;Wallabag is a read-it-later self-hosted service, similar to Instapaper, where you can save articles and read them later. This service has numerous integrations, a chrome extension, phone apps, and web apps to help you add, read, and organize articles. As the Christmas holidays arrived, I found myself with a lot of free time and a desire to do some coding, but I forced myself not to engage in any work-related stuff. I remembered Wallabag and decided to fully dive into getting that set up on my home server and validating that it would solve all these needs I had.&lt;/p&gt;
&lt;p&gt;Much of the work of getting a local Wallabag instance created and running was sped up by getting a coding agent to drive most of the setup, including all the configuration for &lt;a href=&quot;https://docs.docker.com/compose/&quot;&gt;Docker Compose&lt;/a&gt;. Once that was running, I got the Wallabag Kindle app setup and configured to use my local Wallabag server. I could add articles to Wallabag through the web UI, via the android app, or a browser extension. Then on my Kindle, I just had to click to fetch the latest content from the Wallabag server and it all showed up! Magic.&lt;/p&gt;
&lt;p&gt;My main goal was to replace the functionality of the Kindle RSS reader. This was partially working now: I could add articles from any of my devices and they’d get synced to the Kindle, but there wasn’t a way to auto-add articles via RSS feed. Unfortunately, the authors of Wallabag are opposed to adding this feature. Thankfully, there are other open-source services and RSS readers that integrate with Wallabag. I tried using &lt;a href=&quot;https://miniflux.app/&quot;&gt;Miniflux&lt;/a&gt;, but it required each article to be manually saved to actually have it sent to Wallabag, so that wasn’t suitable.&lt;/p&gt;
&lt;p&gt;I decided to code exactly what I needed: a service that periodically pulls all the RSS feeds and blogs I’m interested in, then sends the new posts to the Wallabag API to actually add those articles. I created a small tool called RSS Wallabag. It’s a Python service that fetches all RSS feeds every hour. It has a mechanism to track which articles it has and hasn’t seen, allowing it to call the Wallabag API to add the new articles. It also extracts the article’s tags, so those show up alongside the article in Wallabag. Later on I also added the ability to proxy Medium articles through a free service called &lt;a href=&quot;https://freedium-mirror.cfd&quot;&gt;freedium-mirror.cfd&lt;/a&gt;, since fetching directly from Medium doesn’t return any content.&lt;/p&gt;
&lt;p&gt;Now, I can sync my Wallabag client running on my Kindle and it’ll fetch all the new articles and content from the Wallabag server. Images are included, and the formatting looks great. I can even manually save articles, and they’ll also show up. It’s fantastic, and I’m really enjoying it because I’m not staring at a screen into the late hours of the night fighting falling asleep.&lt;/p&gt;
&lt;h2 id=&quot;who-im-currently-following&quot;&gt;Who I’m currently following&lt;/h2&gt;
&lt;p&gt;This list is pretty new since I just started assembling this over the past couple months, but it mainly focuses on people who write about AI, agents, and where that’s all going. The AI hype is real.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://simonwillison.net&quot;&gt;simonwillison.net&lt;/a&gt; - absolutely great daily commentary on AI news&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://steve-yegge.medium.com&quot;&gt;steve-yegge.medium.com&lt;/a&gt; - creator of Beads and Gas Town&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://addyo.substack.com&quot;&gt;addyo.substack.com&lt;/a&gt; and &lt;a href=&quot;https://addyosmani.com/blog&quot;&gt;addyosmani.com/blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://ghuntley.com&quot;&gt;ghuntley.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://steipete.me&quot;&gt;steipete.me&lt;/a&gt; - creator of OpenClaw/Clawdbot&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://me.0xffff.me&quot;&gt;me.0xffff.me&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://karpathy.bearblog.dev&quot;&gt;karpathy.bearblog.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://blog.cloudflare.com&quot;&gt;blog.cloudflare.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.danshapiro.com&quot;&gt;danshapiro.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://sankalp.bearblog.dev&quot;&gt;sankalp.bearblog.dev&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://ashtom.github.io&quot;&gt;ashtom.github.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://lucumr.pocoo.org&quot;&gt;lucumr.pocoo.org&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title>To plan or yolo, and everything in between</title><link>https://jonsimpson.ca/to-plan-or-yolo-and-everything-in-between/</link><guid isPermaLink="true">https://jonsimpson.ca/to-plan-or-yolo-and-everything-in-between/</guid><description>To plan or yolo, and everything in between</description><pubDate>Sat, 24 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When it comes to planning out a new feature (aka. developing the spec), there are a lot of great techniques and advice out there. There are also a lot of pitfalls with adopting more of these techniques than is useful for your day-to-day work. I just read a long but good post from &lt;a href=&quot;https://addyosmani.com/blog/good-spec/&quot;&gt;Addy Osmani on writing good specs&lt;/a&gt;. That post goes into a lot of the different considerations and a structured process for coming up with these plans. Most times, developers don’t need to go this crazy and adopt all of this for their own workflow. I personally think it would be hell to have to plan this way for the majority of my own work.&lt;/p&gt;
&lt;p&gt;As developers driving these coding agents, we have a wide range of possibilities: don’t even build a plan at all, all the way towards a plan consisting of a fully fleshed out multi-page technical brief. Putting a significant amount of your own time and effort into making the best plan doesn’t always translate to even better code. For simple things you’ll want simple solutions, and agents can be great at that since there’s only a small amount of possibilities of how it would be built. But the bigger and more complex the thing to be built is, the more decisions and guidance agents will need to have or else they can go off the rails and build something you don’t like. This is relevant anywhere - whether you’re working on your own side project, building fast at a startup, or jumping through dozens of different requirements at a large company. The context of what you’re building and what all the nonfunctional requirements are will affect how much time and effort goes into the planning phase.&lt;/p&gt;
&lt;p&gt;In personal projects, the quality and end result of the agent generated code is up to you, so the quality and effort barrier can be just about zero if you really wanted to. Just yolo ship whatever code it creates if it works, and don’t even bother looking at it. At startups, there are likely only a few requirements and quality bars that really matter (eg. the feature works, it doesn’t break the entire app, the code doesn’t look like a ball of spaghetti already). Then at large companies teams probably have to jump through hoops of people reviewing their plans, and take into consideration stuff such as UI design practices, localization, coding standards, technology choices, compliance, etc.&lt;/p&gt;
&lt;p&gt;As you can see, the planning phase for a personal project can be very small and to the point compared to also nailing down all nonfunctional requirements at a large company. Therefore, large companies and startups who already have well-ingrained processes and requirements lack the nimbleness of someone who can yolo everything on their own side projects. Consider detailing the requirements that are easier for an agent to follow into rules files and use them in your planning workflow. You may also consider reevaluating if some other requirements are actually necessary or too strict for this new agent-driven development world we’re all thrust into.&lt;/p&gt;
&lt;p&gt;As I’m writing this, I came across &lt;a href=&quot;https://ghuntley.com/papercuts&quot;&gt;Geoffrey Huntley’s post&lt;/a&gt; on “model weight first companies” - companies in which they don’t have to exhaustively instruct an agent to give it what they need since the default answer given likely will work best for their purposes. Non-model weight first companies have codebases, processes, and practices built up over years and will often have to provide a lot of requirements and context to the agent to get it to output what suits their needs. Think a one-liner prompt for building a new feature in a CLI tool vs having engrained practices and developer ergonomics that need to be adhered to for the same new feature in the CLI tool. This is another great way to describe how much more difficult and slow larger companies have it at using agentic coding tools. Best for a company, or even a team, to adopt or get eaten by your competitors.&lt;/p&gt;
&lt;h2 id=&quot;planning-with-agents&quot;&gt;Planning with agents&lt;/h2&gt;
&lt;p&gt;The best part of agents is having it take your learned practices, either through a prompt or previously written doc, and applying that to the current problem. You can quickly apply your or your team’s UI design best practices to a plan if it requires any UI work. The same goes for any other functional and nonfunctional requirements too, eg. every metric and report must take in the user’s preferred timezone and output the data according to that.&lt;/p&gt;
&lt;p&gt;Some teams have these planning rules broken out into their own documentation, others may have them always included in their agent’s context. Wherever it is, it guides the agent to automatically take these into consideration when writing the plan. Then all you have to do is review it. This can negate the impact that people and teams have when needing to plan out medium to large projects since the best practices can be applied by the agents, and quality can stay consistent if it’s easy for developers to do this each time they create plans.&lt;/p&gt;
&lt;h2 id=&quot;on-manually-crafting-the-right-context-and-instructions-for-each-agent-to-use&quot;&gt;On manually crafting the right context and instructions for each agent to use&lt;/h2&gt;
&lt;p&gt;Part of Addy’s post went into breaking down the spec into smaller tasks that are simpler for the coding agents to take and work on. This is good practice, but lean into agents doing this for you. The trap this post uses as an example (which some basic prompting and use of Beads could easily solve) is manually feeding the agent the right snippets of context and instructions for each task of the plan, eg.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;after the spec is written, your next move might be: “Step 1: Implement the database schema.” You feed the agent the Database section of the spec only, plus any global constraints from the spec (like tech stack). The agent works on that. Then for Step 2, “Now implement the authentication feature”, you provide the Auth section of the spec and maybe the relevant parts of the schema if needed.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This level of involvement of directly providing the right context and instructions to an agent is too low level. You’re constrained by the speed at which you can copy/paste or manually type the context and instructions into the agent’s prompt. You may also be reaching a limit of how many agents you can drive by hand at a single time.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;/using-beads-to-supercharge-my-workflow&quot;&gt;this post on how Beads can speed up your workflow&lt;/a&gt;, I mentioned that Beads, a lightweight issue tracker built for agentic use, is great for managing the context and instructions for each plan’s task. An agent can be prompted to take the plan and embed the exact context and instructions that each task needs. Then when its time to build, a fresh agent picks up the task and has all the context it needs to successfully complete the task with the necessary quality.&lt;/p&gt;
&lt;p&gt;You can even automate multiple parts of this planning and building process, saving valuable time to focus on higher-level activities like planning and reviewing the resulting code. Agents can apply your best practices to plans, are able to break down plans recursively into smaller and smaller tasks with the necessary context they need. The prompts to do this are pretty simple. Check out that aforementioned Beads post for a lot of tips.&lt;/p&gt;
&lt;p&gt;Agents can do a lot more things than just write code. As mentioned earlier, they can effectively build plans that work with your codebase and best practices, and even do the gruntwork of breaking plans down into small tasks that agents can successfully implement. Give it a shot with your own workflows and best practices, you’ll see a lot of it can be automated, freeing you up to work on even more things.&lt;/p&gt;</content:encoded></item><item><title>Using Beads to supercharge my workflow</title><link>https://jonsimpson.ca/using-beads-to-supercharge-my-workflow/</link><guid isPermaLink="true">https://jonsimpson.ca/using-beads-to-supercharge-my-workflow/</guid><description>Using Beads to supercharge my workflow</description><pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Beads is a real game-changer for me. It’s such a simple tool, but I’m surprised the mainstream hasn’t caught on yet since it makes building with multiple agents so much easier to manage. I’ve been using Beads for several weeks now as my key way to organize myself when it comes to AI-assisted development. I haven’t seen that many articles out there of people sharing their experience and the usefulness of using Beads. So I definitely knew I had to write about it and share my own experience. Hopefully others give it a try.&lt;/p&gt;
&lt;p&gt;A quick tl;dr: Beads is a lightweight issue tracker that you and your AI agents use to build software, or really anything that an LLM can do! It beats the crap out of using just a markdown plan, and makes tracking work across one or multiple agents more effective.&lt;/p&gt;
&lt;h2 id=&quot;what-is-beads&quot;&gt;What is Beads?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/steveyegge/beads&quot;&gt;Beads&lt;/a&gt; is designed for use with agents, otherwise it’s just a CLI-based issue tracker. The way agents use beads so effectively is because it’s all a CLI-based tool that’s simple enough to explain in your coding agent’s AGENTS.md or a startup hook. It installs itself into your repo and everything is tracked within git, so it’s basically impossible to lose anything. Think of it as your swarm of agent’s working memory for what to work on next.&lt;/p&gt;
&lt;p&gt;Beads provides primitives like issue types (epic, task, bug, chore, etc.), a way to encode the dependencies between tasks (A -&gt; B -&gt; C, D -&gt; C), and fields for title, description, assignee, tags, etc. The key with beads is not directly using its CLI yourself. Instead, you talk to your agent which then runs the proper beads commands to create issues, tell you the status of epics, etc.&lt;/p&gt;
&lt;p&gt;Then the important part: when your agents are hungry for work, you just tell them to work on the next available beads issue, and let it chug away. The agent automatically figures out what’s available to work on, claims the issue for itself so others don’t also start working on it, then builds away until it’s done. When it’s about to stop, it marks the beads issue as complete. There you have it - this is the separation of the planning from the execution.&lt;/p&gt;
&lt;h3 id=&quot;better-than-markdown-plans&quot;&gt;Better than markdown plans&lt;/h3&gt;
&lt;p&gt;Markdown plans are great, but Beads takes them and makes building whatever is planned even better. It accomplishes this by making plans stateful, traceable, and usable by multiple agents over time instead of just one working away at the single plan. Sure, markdown plans can be updated and the state of the work can be tracked over time, but LLMs are notorious for lying and cheating and not completing everything you want out of them.&lt;/p&gt;
&lt;p&gt;The key point of this post is supercharging your markdown plans by putting them into beads. It’s a way for one or many agents to much more easily and successfully build the right thing with the right quality, reducing hallucinations and slop. It’s as simple as telling your agent to take the plan and put it into beads as an epic with subtasks. Putting this all into beads allows each agent to focus on just the work they need. More up-front planning is possible by putting more context into the epic and its subtasks. In the example below I go into iterating on the epic while it’s in beads - you can push up the complexity that agents face with limited context into this planning phase by pushing as much context as you want into each individual beads issue. And the best part with that is, you just literally prompt another agent to add more detail to that epic’s issues!&lt;/p&gt;
&lt;h2 id=&quot;impact-on-agents-and-humans&quot;&gt;Impact on agents and humans&lt;/h2&gt;
&lt;p&gt;There are benefits for both agents and humans when it comes to using beads.&lt;/p&gt;
&lt;h3 id=&quot;for-agents&quot;&gt;For agents&lt;/h3&gt;
&lt;p&gt;Beads provides agents a way to work on the next unblocked task since it takes dependencies into account. Your &lt;code&gt;AGENTS.md&lt;/code&gt;, a rules file, or hook will contain some instructions for how your agent should use Beads, therefore it knows the exact beads commands to run to claim an issue and get its relevant context. This allows your agent to focus on the one specific task, and do everything it’s instructed to do. Having agents work on a single, small piece of the work helps prevent an agent working on too much at once, which often means slower, larger context windows.&lt;/p&gt;
&lt;h3 id=&quot;for-humans&quot;&gt;For humans&lt;/h3&gt;
&lt;p&gt;Humans don’t, and shouldn’t actually need to use the Beads CLI directly - just ask the agent to create tasks, epics, show tasks, get the status of some work, etc. It’s way easier to not lose your work since everything is always automatically saved to git. All these markdown plans can clutter up your repo if you’re checking them in, but Beads stores its state in git either on your main branch or even better, a designated sync branch. It’s quite easy to see the progress of items through either the CLI or one of the web UIs. I personally use the npm package &lt;code&gt;beads-ui&lt;/code&gt; to visualize and read new and in progress work. Beads doesn’t change the way you use branches, commit, or create PRs - that workflow is all still up to you.&lt;/p&gt;
&lt;figure&gt;
  &lt;img src=&quot;/static/images/2026/01/beads-webui.png&quot; alt=&quot;Beads web UI showing a few tasks&quot;&gt;
  &lt;figcaption&gt;
    A third-party Beads web UI makes it easy to visualize and manage issues, epics, and tasks across your workflow.
  &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2 id=&quot;my-workflow&quot;&gt;My workflow&lt;/h2&gt;
&lt;p&gt;Here are two of my primary workflows that I use day to day at work and on my personal projects. The bugfixing workflow is a lot quicker and simpler since it doesn’t involve a whole bunch of planning, and is really just one-shotting fixes. The feature development one does a whole lot of planning and shows the real strength of beads.&lt;/p&gt;
&lt;h3 id=&quot;for-bugfixing&quot;&gt;For bugfixing&lt;/h3&gt;
&lt;p&gt;When I find a bug, I tell my agent to file a new issue with just enough context about the problem: &lt;code&gt;file a bd issue for: tickets page, tags filter doesn&apos;t prepopulate available tags&lt;/code&gt;. The agent then goes on and creates an issue for it in beads, responding with details about the issue it created.&lt;/p&gt;
&lt;figure&gt;
  &lt;img src=&quot;/static/images/2026/01/file-bd-bug.png&quot; alt=&quot;Using an agent to file a new bug using beads&quot;&gt;
  &lt;figcaption&gt;
    Using an agent to file a new bug through Beads streamlines bug tracking, with the agent automatically creating and updating issues from your instructions.
  &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;I often check the Beads issue right there in the agent output or Bead’s web UI to see if it went and added any extra context for the description. Sometimes I immediately call my slash command for adding more context: &lt;code&gt;/refine-bd&lt;/code&gt; which expands to the following prompt: &lt;code&gt;refine this bd issue or epic to verify that this is the best way to implement this and integrates well into the codebase. Make sure each task has all the detail necessary for an AI coding agent to properly implement this task at a later time. If anything gets too complex or large, split it into smaller issues. make sure the dependencies between other tasks make sense.&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Then I can either save it for later or I get another agent to go and work on it. In a fresh agent I say &lt;code&gt;work on bd issue m-dy4&lt;/code&gt; or whatever issue ID beads automatically gave it. After it’s finished I check the agent’s output to see how well the agent solved the issue, whether the solution was in the right spot, and manually confirm it’s definitely fixed (most of the time). Then I tell it to commit.&lt;/p&gt;
&lt;h3 id=&quot;for-feature-development&quot;&gt;For feature development&lt;/h3&gt;
&lt;p&gt;For any new feature I basically follow this flow: I ask my agent to come up with the plan, for example: &lt;code&gt;Create a plan for adding the ability to have more support agent permission levels. regular agent: ..., team lead: ..., etc.&lt;/code&gt; I read the markdown plan it gives me to check its approach. I don’t really care about what steps it implements it in, really just the architecture, UI, data, and any other concerns relevant to the feature. I ask it clarifying and investigative questions to figure out if it’s the right approach and if there’s other alternatives. This often makes me confident enough that the plan is sound and uses the best design.&lt;/p&gt;
&lt;p&gt;Once I’m happy with the plan I have another slash command that takes a plan and converts it into a beads epic with multiple subtasks. This step bridges the well designed feature into the beads issue tracker. The &lt;code&gt;/plan-to-bd&lt;/code&gt; command I use here expands to: &lt;code&gt;turn this into a beads epic. create issues and subissues that best represent the chunks of work to complete. include lots of detail for each issue.&lt;/code&gt; That command then chugs away taking every detail, consideration, and step in the plan and creates an epic with multiple tasks for it.&lt;/p&gt;
&lt;p&gt;Initially the epic and its tasks still don’t have enough detail, maybe a few sentences and mentions of function names, but not much. We can do way better, and in doing so, the agents that build this later on have a much higher chance of success and issue-free code. We can basically go crazy and cram everything the agent would need in there: exact code to add, lines to change, stuff to import, tests to write, things to know, things to ignore, etc. There could also be some tasks that are too big and should be broken down into smaller chunks, which makes the agents work easier and cheaper if its context window isn’t blowing up by reading and writing a ton of code. There’s also interdependencies that the tasks have with each other within the epic that the markdown plan probably didn’t do that great of a job capturing. This prompt pumps so much context into all of the tasks. It’s my favourite step in this entire article. I run the &lt;code&gt;/refine-bd&lt;/code&gt; slash command 2-4 times, since a single iteration often doesn’t break things down all at once, add enough detail, or map out all interdependencies. Here’s that prompt: &lt;code&gt;refine this bd issue or epic to verify that this is the best way to implement this and integrates well into the codebase. Make sure each task has all the detail necessary for an AI coding agent to properly implement this task at a later time. If anything gets too complex or large, split it into smaller issues. make sure the dependencies between other tasks make sense.&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;After a few loops of running &lt;code&gt;/refine-bd&lt;/code&gt;, we have a very detailed epic with many detailed subtasks, all in the proper order of implementation. This is the key to making hundreds or thousands of lines of code changes accurately since each agent has its small scope of work to do, and what matters most for its part of the work. This prevents it from having to guess what code it needs to write by going back and looking at the plan, and just making up shit while hallucinating what it should do to finish its job. Having everything detailed in the task makes it clear to the agent what exactly needs to be done so that it stops when needed. You can even give these builder agents the Sonnet-level of coding agent instead of the smartest Opus-level ones because of this.&lt;/p&gt;
&lt;p&gt;Then here’s the best part. I checkout a branch, start up a fresh agent, and ask it to &lt;code&gt;work on the next available bd task for epic m-onc. After completing the task, commit and stop&lt;/code&gt;. That’s also the &lt;code&gt;/work&lt;/code&gt; slash command. It chugs away doing all the work and commits when it’s done. I don’t even check what it’s written. All I know is that one of the tasks of the epic was claimed and completed.&lt;/p&gt;
&lt;p&gt;Next up, in the same session, &lt;code&gt;/work-post&lt;/code&gt; checks that it was implemented according to the task and that nothing else was missed. If it finds any bugs or edge cases that weren’t considered, it files new Bead issues for it: &lt;code&gt;anything else part of that still left to do? make sure the work was committed. notice anything that should be updated in the beads issues part of this epic or in general? Add or update it to a bd issue so that the work isn&apos;t lost and can be done later. reference the epic if the work is part of it&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;I’ve scripted a loop so that this builder agent does its work for a configurable number of iterations so that I’m not having to go back to the same agent and clear it then startup a new loop. As this loop runs I check out the issues it creates as it’s building and decide which ones are valid vs ones that I’ll just close since they’re not important enough. Oftentimes there’s real edge cases or bugs it finds in the related code it’s modifying, or duplication that should be avoided. The stuff that doesn’t really matter is small code duplication, and stuff about better logging or comments, for example.&lt;/p&gt;
&lt;figure&gt;
  &lt;img src=&quot;/static/images/2026/01/work-command.png&quot; alt=&quot;Using the work command to loop through and work on the next available beads issue&quot;&gt;
  &lt;figcaption&gt;
    The &lt;code&gt;work&lt;/code&gt; command streamlines the entire process, allowing agents to focus and automatically tackle the next available task in Beads. This hands-off loop is a game-changer for effortless, continuous progress.
  &lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;After the entire epic is completed and all the followup issues are tackled too, I often fire up my dev server to test out the new feature. As that’s going, in a new agent I also run the following code review slash command to find more bugs or potential issues. The &lt;code&gt;/review&lt;/code&gt; command expands to: &lt;code&gt;review the code modified in this branch. look for any bugs, redundant code, or obvious overcomplication. file beads issues for anything that comes up. call out if the newly found issue already existed or was introduced in this branch.&lt;/code&gt; I use a stronger LLM for this like Opus or GPT-5.2 to really churn away and inspect the code. This prompt ends up calling a lot of git commands and reads a lot of files to inspect everything. This agent often finds a few things, and running it again in a clean context can dig up more.&lt;/p&gt;
&lt;p&gt;As I’m manually testing it I file Beads issues as I find them in the same way as above, and review those that the &lt;code&gt;/review&lt;/code&gt; command found. I run that work loop again until all of those tasks are completed and manually verify everything is working as expected.&lt;/p&gt;
&lt;p&gt;When I’m happy with how things are - manual testing passed, no bugs being found, no more tasks to review or work on - I push it up and create a PR. As a PR, this is the first time I actually look at the code. I generally skim over it all and verify it’s not doing anything insane with data access, no messy spots, and the critical parts are safe. Then merge that sucker!&lt;/p&gt;
&lt;h2 id=&quot;the-power-of-this-workflow&quot;&gt;The power of this workflow&lt;/h2&gt;
&lt;p&gt;This workflow allows me to quickly and accurately build out new features and fix bugs much faster than just using Cursor’s IDE or multiple cursor-agents manually. The speedup gained when using Beads is massive, where I can highly rely on the resulting code since there’s already been many quality checks performed on it.&lt;/p&gt;
&lt;p&gt;I was explaining my workflow to a buddy who regularly uses Claude Code the other day and the best way I could explain my pivot from driving multiple coding agents at the same time to this Beads-powered &lt;em&gt;stand back and go wild&lt;/em&gt; way is because I spend more effort at the beginning and end stages of the development process.&lt;/p&gt;
&lt;p&gt;With directly driving coding agents, a lot of effort goes into the middle stage, where you watch over all the changes it’s making, telling it what to do next, asking it questions as it goes, committing, and really just doing what feels right - like how you would work without AI writing your code in the first place. The agent is speeding up your non-agentic way code is written.&lt;/p&gt;
&lt;p&gt;This new method with Beads puts more effort into the beginning stage to plan out what you’re trying to do, and make sure it’s the right design. The middle stage takes no real effort at all since it’s just getting the agents to loop through all tasks and build it. The end takes a similar amount of effort compared to the beginning since there’s all that manual testing, sorting through Beads issues the agents have raised, and the usual push up changes and opening of a PR.&lt;/p&gt;
&lt;h2 id=&quot;now-go-get-it&quot;&gt;Now go get it&lt;/h2&gt;
&lt;p&gt;In the end, leaning into agents to help you speed up planning and investigating the right approach with Beads being the context engine that your agents are driven from is such a powerful workflow. Now only if these plans can be created faster or these manual workflows I have can be automated more 🤔 Well, there are several scripts I’ve made that wrap these very common slash commands plus iterating several times (see the next section) that has helped a lot, and can go even further. There’s also this crazy orchestrator (a tool to further abstract away what is to be done from the agents that do the actual work) called Gastown I just tried out. I ended up burning through $100 from its own bugs. Oh well, that’s alpha, alpha software for you.&lt;/p&gt;
&lt;h2 id=&quot;extras&quot;&gt;Extras&lt;/h2&gt;
&lt;h3 id=&quot;my-own-scripts-to-automate-a-bunch-of-the-tediousness&quot;&gt;My own scripts to automate a bunch of the tediousness&lt;/h3&gt;
&lt;p&gt;It almost goes without saying, but by automating these tedious workflows so much time is saved, and the science of this becomes engineering prompts that encode the right software development practices and specifics for the codebases you’re working on. Aka. what do you actually care about, and don’t want to have to fix later or guide the agent to do manually.&lt;/p&gt;
&lt;p&gt;I’ve written a quick and dirty shell script that wraps cursor-agent to do a lot of the tedious steps such as the work loop I mentioned in the feature development section. That really just runs two prompts back to back in the same session then creates a new session and does it all again on the next piece of work. I’ve also adapted this to the earlier mentioned &lt;code&gt;review&lt;/code&gt; and &lt;code&gt;refine-bd&lt;/code&gt; prompts since those are also useful to run in loops.&lt;/p&gt;
&lt;p&gt;Any of these scripts can take in a Bead issue or epic, and that enriches the prompt to focus solely on that work. It’s not really advanced, just a bunch of prompts shoved together with some workflow logic. I spiced it up with the ability to stream the realtime outputs of the agent to the console instead of them buffering and outputting its output all at once. The markdown output of the agent is even properly rendered to look like rich text.&lt;/p&gt;
&lt;p&gt;As I was testing out the Gastown orchestrator for the first time I had it build a Golang port of these scripts. Some Gastown bugs prevented me from putting the final touches on it before writing this post. Gastown will probably remove my need for my own scripts, but the process of using my own tools and improving the prompts as I go is still invaluable since those will continue to be used and is a necessary skill to drive these agentic coders.&lt;/p&gt;
&lt;h3 id=&quot;things-that-i-want-to-improve-in-this-workflow&quot;&gt;Things that I want to improve in this workflow&lt;/h3&gt;
&lt;p&gt;Being able to use multiple git worktrees would be killer. At work we prefer using PRs for shipping all of our work, so working on multiple things at once on the same branch is tough if one thing is a feature and another thing is a completely random bugfix. Hopefully there’s an easy script I can come up with that allows me to quickly set up these worktrees, get in there with an agent or two, and run a dev server from any one of them. I feel like that’s currently blocking me the most. Gastown looks to have smoothed over working on multiple pieces of work via git worktrees, but knowing how this stuff is done myself helps with my own mental model of git.&lt;/p&gt;
&lt;p&gt;A better way to sort through all of the issues agents create would be great too. Currently they’re all added to the relevant epic, which is good, but often the &lt;code&gt;work&lt;/code&gt; loop can quickly pick those up and work on them without me being able to validate if they’re a concern or not. Most of the time it’s fine, I can drop the commit if I don’t want it. Potentially using Bead’s assignee field or &lt;code&gt;priority: backlog&lt;/code&gt; would signal that these are agent-created ones for me to sort through and not for the agent to work on. Gotta go try that out. The great thing is this is likely just a sentence or two added to the &lt;code&gt;work&lt;/code&gt; prompts!&lt;/p&gt;
&lt;p&gt;Similar to the last one, improving my &lt;code&gt;/work-post&lt;/code&gt; and &lt;code&gt;/review&lt;/code&gt; prompts to take a page from the &lt;a href=&quot;https://github.com/anthropics/claude-code/tree/main/plugins/pr-review-toolkit&quot;&gt;claude-code plugins&lt;/a&gt; can help make it even more accurate and effective. Adding in some of the code review wording and ranking the confidence of how accurate an issue is on a numerical scale would give me more info when reviewing the issues it files. That way critical bugs have a higher priority, and smaller stuff like small bugs, code duplications and comments get a lower score.&lt;/p&gt;</content:encoded></item><item><title>Takeaways from leaning into agentic coding</title><link>https://jonsimpson.ca/takeaways-from-leaning-into-agentic-coding/</link><guid isPermaLink="true">https://jonsimpson.ca/takeaways-from-leaning-into-agentic-coding/</guid><description>Takeaways from leaning into agentic coding</description><pubDate>Sat, 03 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I’m having an absolute blast playing the game of figuring out how to utilize coding agents to speed up software development at work and in my personal time. It’s been a wild ride witnessing this technology evolve so much over the past year, and discovering all the ways it can supercharge my own work.&lt;/p&gt;
&lt;p&gt;Cursor has been my main driver for the past two years. Tab completion was a great introduction to what the power of AI could be. Their model was a great improvement over VSCode’s Copilot at the time. It felt like magic when it was able to crank out several lines of exactly what I needed, or saved those valuable seconds making a repeating change.&lt;/p&gt;
&lt;p&gt;Then the agentic ability to modify code in the same file given some instructions showed the real power. Fast forward a few months and they were able to work across a few files, but poorly. Nowadays the LLMs’ large context windows and efficiency at using tools are so powerful at making wide changes across any size of codebase. What used to require careful manual coordination across multiple files now happens in a few dozen seconds. Building up that mental muscle to reach for an agent to make a change or go investigate how some code works is only going to pay off as these models get more capable with better access to tools and autonomy at making the right changes.&lt;/p&gt;
&lt;p&gt;Cursor has been my daily driver for my professional and personal projects but the way I’ve been using it has had some notable inflection points in the last few months as I pushed myself to offload even more work to the agents and find the balance of quality vs speed. Here are several work and personal takeaways of using agentic coding, as well as powerful workflows I’ve been exploring.&lt;/p&gt;
&lt;h2 id=&quot;experience-from-work&quot;&gt;Experience from work&lt;/h2&gt;
&lt;p&gt;Recently I’ve been on an absolute tear building out a contact centre product. It has been a great opportunity to get a lot of experience of what to do and not to do when it comes to AI-driven development. I’ve learned some hard lessons, but the productivity gains have been worth it.&lt;/p&gt;
&lt;h3 id=&quot;the-good&quot;&gt;The good&lt;/h3&gt;
&lt;p&gt;Given an existing contact centre UI but nothing in the backend existing, I had a big paint by numbers exercise to tackle. I started off with creating the core database models with the help of some AI, getting a thumbs up from some colleagues, then dived into wiring up each feature one at a time. I’d give an agent a line or two of instructions and tell it to wire up some frontend component to the backend models. This worked out very well since each feature at this point was just doing some CRUD work and needed an API, database manipulation, and some React logic. I still had to manually test each feature after adding it of course, but the implementation speed was incredible. What would have taken weeks of grunt work happened in days.&lt;/p&gt;
&lt;p&gt;Agents are incredible at debugging and fixing things. It makes bugfixing all the easier and quicker to do. Several times I’ve had a wow moment when pasting in a link to a bug from Sentry and having the Sentry MCP fetch everything about that issue. Then the agent just goes to town investigating and solving it. For most easy to medium difficulty bugs it solves it most of the time. Sometimes it nails the investigation but the fix it came up with wasn’t in the proper layer, so I tell it the proper way to fix things. What a time saver. No more context switching between Sentry, your editor, and multiple files trying to track down the issue.&lt;/p&gt;
&lt;p&gt;Another surprising thing is realizing the amount of code you can ship in a single day. 500-1000 lines a day initially felt wild, but now that’s table stakes. That doesn’t even count the tests, which can easily be 2-3 times more than that. Raising this number even higher does mean though that you’re spending less and less time looking at all the generated code. It’s a combination of trust that the AI won’t write a backdoor or superbly slow code, learning how to better prompt the AI, and continually improving the AGENTS.md file.&lt;/p&gt;
&lt;p&gt;The name of the game these days is how much functionality can you reliably ship? This is turning out to be quite a fun challenge, and the way forward is pushing more and more of our own tasks to an agent. Agents don’t just need to be the ones writing the code, they can do the bugfinding, planning and investigation, and even be a sounding board for ourselves.&lt;/p&gt;
&lt;h3 id=&quot;the-bad&quot;&gt;The bad&lt;/h3&gt;
&lt;p&gt;When building an embeddable chat widget for this help centre product, I had a similar situation where the frontend was all complete, and just needed to have the backend created for it. Right out of the gate I got agents to wire up the API endpoints and other logic needed for this widget. Over time though the frontend’s state management became an issue, where the data fetching and triggering of mutations just wasn’t using the well tested pattern of SWR and hooks. Haphazardly adding those in after the fact helped, but there still was this toplevel state being passed around to a lot of components. My own lack of attention to the choices the agents were making, and the signs of trouble they were running into with all the bugs they would introduce, meant adding new features and functionality slowed down a lot. With a fully functional support widget, I had several refactoring moments where lots of the state was cleaned up. My aha! moment was when the state of which page the user was on became too duplicated and cumbersome that I got an agent to rip it all out and replace it with react-router. No longer did we have a global state object to modify, all the necessary global state was nicely stored in react-router’s URL. Then the rest of the widget’s code could decide which component to render based on the URL. It all became much simpler and easier for the agent to reason about once this refactor was complete. In hindsight I should have decided on and added in these technology choices before building out the backend for this support widget, but I take all the trouble and anguish it’s caused me as a great learning opportunity. Agents are great at adding features quickly, but it won’t necessarily tell you when your architecture is becoming a mess.&lt;/p&gt;
&lt;p&gt;Some agent-written tests are absolute trash. Existing code smells relating to testing do apply here as well, such as some code just doesn’t need to be tested, or simple code is overly tested and should just be deleted. Each crappy, invaluable test just slows your entire test suite down. So with the speed of agents being able to write code and tests, a slow test suite can happen much quicker. I’ve seen agents write tests for code that tests the old, deprecated way of calling some code alongside the new proper way. When I saw this, this made me question why there’s still an old way to call this code, even when I want everything updated to use the new way. This is the agent unfortunately taking shortcuts at its work. Ideally it should properly refactor and change the codebase to keep the code simple.&lt;/p&gt;
&lt;h2 id=&quot;experience-from-personal-projects&quot;&gt;Experience from personal projects&lt;/h2&gt;
&lt;p&gt;In my personal time, I’ve been working on some fun projects I otherwise wouldn’t have gotten to. One of them is a &lt;a href=&quot;/dogfood-calculator&quot;&gt;calculator to determine how much dog food to feed my dog&lt;/a&gt;. This project was enjoyable because it solves the real problem of properly feeding a growing puppy the right amount of food based on their weight. Having AI hack together the React UI and calculations made its development so much quicker. I spent a respectable amount of time double-checking the calculations, since that’s the important part, and lo and behold it did get it wrong. My partner and I still use it every day. Getting the AI to design a cool but silly user interface, something that would have taken me forever to get the right Tailwind and other CSS working correctly, took just seconds. I likely would just never have added those design flourishes if it wasn’t for the AI building it.&lt;/p&gt;
&lt;p&gt;Another project I worked on was a server status page. I wanted a way to view all my home server’s metrics and status at a glance without having to SSH in and remember all the commands. My goal was to create a script that would automatically generate and upload a webpage to Cloudflare Workers, secured behind a page accessible only to me. After having to manually fight Cloudflare Workers and fetching API keys, I used AI to help plan the script’s development. I had it SSH into the server to identify where everything was located, what commands could be used, and which versions of software were available. Beyond spot-checking the results, I didn’t look at the code at all. And it works! This is the kind of project that would have languished in my todo list forever without AI assistance.&lt;/p&gt;
&lt;h2 id=&quot;planning-out-changes&quot;&gt;Planning out changes&lt;/h2&gt;
&lt;p&gt;Since Cursor and the other agentic development tools introduced the mode to create plans, I’ve been using it regularly for planning out features and larger changes. It works quite well at investigating and figuring out what to do. Plus, presenting me with its plan before doing anything is a great opportunity to validate it understands the problem and proper changes. I often ask for changes to the plan to use a different technology or tweak the logic it was proposing. Checklists of steps are useful for both me and the AI to know what to do, but most of the time the AI either misses a step, doesn’t factor in a requirement, or missed investigating something and later on something doesn’t integrate well - all meaning there’s a bunch of cleanup or follow-up to do. If most of this can be caught while planning, ideally by the agent thinking and researching hard enough, the resulting code ideally would be more correct and simple.&lt;/p&gt;
&lt;p&gt;Recently the name of the game is how much code can you ship since LLM costs are so reasonable given the value you get out of it. Projects having good AGENT.md files documenting the preferred code style, technologies, and where things should be located are all massive multipliers for getting quality code out of your agents. Manually modifying code is becoming the antipattern since it’s all about getting the AI to do more and more of your work much quicker and accurately, so any time spent doing things manually is time away from getting agents to do more work and improving the agent workflow.&lt;/p&gt;
&lt;p&gt;Most recently I’m getting into slash commands for doing things like committing, code review, bug finding, and investigating tasks. These are major speedups, especially when moving towards running multiple agents in terminals rather than one or two in Cursor IDE. I’m still using Cursor for viewing full files and reviewing diffs, but it’s becoming less and less. &lt;a href=&quot;https://cursor.com/cli&quot;&gt;cursor-agent&lt;/a&gt;, which is Cursor’s agentic CLI tool competitor to Claude Code, is pretty decent. It’s behind in a few areas where Claude Code is bleeding edge such as plugins, hooks and subagents, but that’s okay for now since I don’t need most of that.&lt;/p&gt;
&lt;h2 id=&quot;even-better-planning&quot;&gt;Even better planning&lt;/h2&gt;
&lt;p&gt;One of the major productivity boosts from the past couple of months is upgrading my use of the agent planning mode. After chatting with the agent to plan out a change or feature, instead of getting the agent to then go build all of that, I get the entire plan put into an issue tracker, where multiple agents later on can go and work from. That issue tracker is &lt;a href=&quot;https://github.com/steveyegge/beads&quot;&gt;Beads&lt;/a&gt;, a lightweight issue tracking system that the agents know how to use, and actually want to use. The big benefit from using this issue tracker is that it decouples the agent from the work to be done, therefore work can be picked up by multiple agents and the work can be iterated on over time instead of being lost in a large markdown file. The primary workflow I have with beads is that aforementioned planning, but also for tracking all the other bugs and things to do when I’m using agents to build for me.&lt;/p&gt;
&lt;p&gt;One paradigm shift that Beads doesn’t explain that well is only interacting with Beads through talking with an agent. There’s no real need to learn the Bead commands or manually type out the issues to create - just tell an agent to create Bead issues for whatever you want, and it’ll do it better and faster than you can. Also tell it to work on the next available Bead issue - it’ll start plowing through tasks very fast.&lt;/p&gt;
&lt;p&gt;My new workflow for when I have a new idea for a feature is working with an agent in planning mode to come up with a plan, but then ask it to refine its plan, adding in more detail, checking that things would integrate properly, etc. so that later on the agent picking up that piece of work would know exactly what to do. Once the markdown file is looking good - mostly I don’t look too closely at it, just verifying the important logic, architecture, and database models look right - I tell it to go ahead and create a Beads epic for this entire plan with each piece of work being broken down into subtasks. The agent then spits everything into Beads. I then ask the agent to continue refining those tasks, verifying it’s the right solution, have it add more context, figure out dependencies between tasks so that the ones that can’t be worked on aren’t started, and split out larger ones into smaller ones. Then the best part: spinning up a new agent and telling it to start working on that epic. That new agent then should have all the detail it needs and will implement it exactly as told. After that agent is done building, I get it to see if there’s anything else to add or update to the epic’s backlog, e.g., bugs, potential issues, then commit it. Rinse and repeat several times with a new agent for each task until the entire epic is completed and you basically have a fully working feature. I then go manually test the feature to make sure everything works, try to break it, and enter another handful of bugfixing or tweak tasks. Lastly, I run a few prompts to perform code review and bug-finding and we’re good to ship it. That’s an easy couple hundred to thousand lines added, even more with tests.&lt;/p&gt;
&lt;p&gt;This workflow of bringing the rough idea to an agent, having it iterate on the plan, verifying the plan and its design, then telling it to go wild and build it is empowering. Being able to automate these loops is something that is definitely on the minds of many and will make many developers unstoppable code machines. We’re already seeing it with many people able to commit magnitudes more code a day.&lt;/p&gt;
&lt;p&gt;I really resonated with one of &lt;a href=&quot;https://steve-yegge.medium.com/six-new-tips-for-better-coding-with-agents-d4e9c86e42a9&quot;&gt;Steve Yegge’s recent posts&lt;/a&gt; where he rightly says that rewriting code as an antipattern is a thing of the past. AI can rewrite swaths of code so quickly, so the cost is close to zero compared to adapting or refactoring inflexible code. Getting past the concept of reviewing all of your code is going to be considered a slowdown too. For most things you can get by with having the AI review the code for you - as long as you are also focusing on good architecture and fixing bugs and code smells, you’re golden.&lt;/p&gt;
&lt;h2 id=&quot;scripting-these-workflows&quot;&gt;Scripting these workflows&lt;/h2&gt;
&lt;p&gt;After using Beads for a few weeks, now I’ve figured out a few useful prompts to help with the process of idea -&gt; Beads issues -&gt; refine -&gt; build -&gt; bugfinding and cleanup. Each step involves a specific prompt that would be iterated on potentially several times to introduce enough certainty into the process that the thing was built correctly and with the necessary quality. Manually rerunning these prompts for each step is very tedious, so I had some shell scripts built that automate calling cursor-cli with a prompt or a series of prompts. They’re very scrappy right now, but prove their usefulness by doing things like looping several times to fetch the next Bead issue to tackle, then go build it, find bugs, and commit. It’s so simple, but does take the tedious commands out of the process. After the looping runs and does big chunks of work, I always go and manually test what it’s built and potentially even doing a quick check on the code. This means I’m reviewing larger chunks of work, and if there’s something that needs fixing, I’ll just tell an agent to go fix it in that moment, or get an agent to file a new Bead issue for fixing later.&lt;/p&gt;
&lt;p&gt;Steve Yegge’s multi-agent orchestration tool, &lt;a href=&quot;https://github.com/steveyegge/gastown&quot;&gt;Gas Town&lt;/a&gt;, has just been released and that gets me excited for the future of where AI-driven development is going. It’s these shell scripts that automate the tedious work of software development process, but supercharged. Besides his writing being very engaging, I’m looking forward to trying it out since it sounds like another large productivity boost. I don’t think I’m even there yet with getting in enough hours using Beads and multiple agents at the same time though, so baby steps. Over the next few months, Gas Town gaining the ability to use other agent harnesses instead of Claude Code, like Cursor CLI or OpenAI Codex, would be a great addition to get more adoption, and make it even easier for me to try it out.&lt;/p&gt;
&lt;h2 id=&quot;what-next&quot;&gt;What next?&lt;/h2&gt;
&lt;p&gt;If we’ve seen anything from 2025, it’s that these new agentic models and tools have taken over like wildfire. The gains in power and productivity will happen even faster. This year will likely continue that trend and I’m excited to have a front-row seat as it redefines our world.&lt;/p&gt;</content:encoded></item><item><title>How I Inadvertently Failed Half a Computer Science Class</title><link>https://jonsimpson.ca/how-i-inadvertently-failed-half-a-cs-class/</link><guid isPermaLink="true">https://jonsimpson.ca/how-i-inadvertently-failed-half-a-cs-class/</guid><description>How I Inadvertently Failed Half a Computer Science Class</description><pubDate>Sat, 29 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;While at a wedding a few months ago, I met this mutual friend who’s a year or two younger than me. We got chatting, and it turned out he worked in tech too — and, funny enough, also studied computer science at Carleton. At some point, I asked him if he’d ever come across any of my old notes that I used to post on my personal GitHub. He wasn’t sure, but he messaged a buddy to ask if he knew who’s notes they used.&lt;/p&gt;
&lt;p&gt;Later that night, he came over to me and told me: “You’re totally that guy”. Apparently I helped him and his friend pass a bunch of their CS courses. He even asked for a selfie to send to his buddy for proof haha. Then he dropped the wildest part: one year, about half of the students in one of those courses were caught cheating and got academically disciplined — all because they used my notes! Wow, that’s wild, and serves them right!&lt;/p&gt;
&lt;h2 id=&quot;how-it-all-started&quot;&gt;How it all started&lt;/h2&gt;
&lt;p&gt;Throughout my CS degree, I made a habit of taking detailed notes and collecting all work in every class, saving assignments, midterms, projects, answers, and putting them up on GitHub. I was inspired by a student a few years ahead of me who did something similar. They had posted their LaTeX notes for some of the upper-year classes, and I figured I’d do the same. I’d always upload my stuff after deadlines passed so nobody could cheat off it, but that didn’t stop students in future years.&lt;/p&gt;
&lt;p&gt;Every now and then, I’d notice traffic spikes on my GitHub repo stats whenever a course I’d taken came around again. I always had this sense that people were using my notes as a study resource — and I honestly liked that idea. Over the years, I’d get the occasional email from someone thanking me for helping them get through a course. Those were always nice to read.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hey Jon, I’m a second year student at Carleton and I just want to thank you for leaving your COMP 2402 Github repo public. I have high respect for people like you! Even though the assignments in your repo didn’t come in handy since our class didn’t have the luck of having Pat Morin as our prof, I still want thank you for posting them.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;While in my fourth year, there was this one time a friend told me to check out the bathroom on the third floor of Herzberg Laboratories — the main CS building at Carleton. Apparently, someone had written &lt;a href=&quot;github.com/jonniesweb&quot;&gt;github.com/jonniesweb&lt;/a&gt; in big, bold marker across one of the bathroom stalls. I went to see it for myself, took a photo, and thought it was hilarious. I even posted the picture in the README of the &lt;a href=&quot;https://github.com/jonniesweb/comp2404&quot;&gt;COMP 2404 repo&lt;/a&gt; — which was the one getting the most traffic at the time.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2025/11/bathroom.jpg&quot; alt=&quot;github.com/jonniesweb written in a bathroom stall&quot;&gt;&lt;/p&gt;
&lt;h2 id=&quot;emails-professors-and-cheaters&quot;&gt;Emails, Professors, and Cheaters&lt;/h2&gt;
&lt;p&gt;Eventually, a few other repos started getting more attention:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I’m a Carleton computer science undergrad and I noticed that you have your COMP 2402 work public. As it turns out, many of the assignment questions you had are the same as the current semester’s. I just wanted to let you know in case you’d rather hide it from students currently taking the course.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Including a request I didn’t expect. One day I received a cease and desist email from the professor that ran one of these courses for several years in a row.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hello Jonathan,&lt;/p&gt;
&lt;p&gt;I would like you to please remove your COMP2401 assignments from your
public GitHub account:  &lt;a href=&quot;https://github.com/jonniesweb/comp2401&quot;&gt;https://github.com/jonniesweb/comp2401&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It’s still one of the top links when I google search “comp2401”, and
many dozens of students have plagiarized your code and submitted it for
credit over the past couple of years.  Since the assignments themselves
are my intellectual property and I own the copyright, they should not be
posted without my permission.&lt;/p&gt;
&lt;p&gt;Please let me know if you have any concerns or questions.  Thank you.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;My reaction? fuck ‘em. What a lazy professor. Just update your coursework! I do love the point about great SEO on “COMP 2401” though lol. This all stems from not updating the assignments, projects, or midterms between semesters, so my public notes were definitely being used by other students to get by in their class. I ignored the email, had a great laugh with friends, and thankfully, I had already graduated by that point.&lt;/p&gt;
&lt;p&gt;Then there was another email, this time from a student who told me that the year before, about half of the 200 students in a particular course had failed because they were caught cheating using my notes. That’s insane, but it was the right call. Don’t cheat.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hey dude. You don’t know me, but this post on reddit: &lt;a href=&quot;https://www.reddit.com/r/CarletonU/comments/89n6cg/i_need_a_lawyer/&quot;&gt;https://www.reddit.com/r/CarletonU/comments/89n6cg/i_need_a_lawyer/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;reminded me of the time I took &lt;a href=&quot;https://github.com/jonniesweb/comp2402&quot;&gt;COMP 2402&lt;/a&gt; in 2014 and half of the students in my class got in trouble for academic fraud because they all copied the answer from the same github page. Idk if you ever saw it, but &lt;a href=&quot;https://github.com/jonniesweb&quot;&gt;https://github.com/jonniesweb&lt;/a&gt; was one of the longest standing graffiti in the Hertzberg bathroom I’d ever seen.&lt;/p&gt;
&lt;p&gt;Anyways thought you’d find it funny.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The subject of that Reddit post also made me laugh. When I took this foundational COMP 2402 course on data structures, the professor and his teaching assistants had this great automated grading system. For each assignment, we would upload our Java source files to this system that would automatically run several test cases on it to grade whether or not we wrote the right implementation of a data structure. It was beneficial since we could get immediate feedback on our code, and even had the ability to resubmit our answers multiple times until we got the correct solution. But also if you’re gonna cheat, the direct proof is there since everyone’s source files are all uploaded into the system.&lt;/p&gt;
&lt;p&gt;One common theme during my times at Carleton was what I called the “culling of the herd” aka. the decreasing amount of students that remain in the program. So by fourth year you’re looking at a quarter of the number of students remaining. Academia is hard and not for everyone. There are plenty of good reasons for people to leave the program, but those who stoop low enough to cheat are missing the point of pursuing a degree and deserve to be reprimanded.&lt;/p&gt;
&lt;h2 id=&quot;so-what-now&quot;&gt;So what now&lt;/h2&gt;
&lt;p&gt;It’s been a few years since I last received any emails about students taking these courses and using my notes. They’ve probably aged out of existence with new professors bringing their own content. What still survives though are the people who benefited. Years later while working at Shopify, I ended up hiring an intern for my team who was also attending Carleton for their Computer Science degree. We got on the subject of Carleton one day and turns out they definitely had used my notes during some of their past courses, and here they were, being an excellent member of the team. They made it through their full CS program and are now a successful Senior Developer. What a wild full-circle moment.&lt;/p&gt;</content:encoded></item><item><title>Discovering podcasts in 2006</title><link>https://jonsimpson.ca/discovering-podcasts-in-2006/</link><guid isPermaLink="true">https://jonsimpson.ca/discovering-podcasts-in-2006/</guid><description>Discovering podcasts in 2006</description><pubDate>Sat, 01 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I was listening to my weekly &lt;a href=&quot;https://twit.tv/shows/security-now&quot;&gt;Security Now&lt;/a&gt; podcast that was referencing an old security issue from 2009. It’s been fixed, but apparently it’s happening again. This made me realize just how long I’ve been listening to some podcasts. Security Now, for example, has over a thousand episodes now, and I started listening when they were only around episode 40, which was during their first year. That’s twenty years, which is absolutely wild.&lt;/p&gt;
&lt;p&gt;I first got into podcasts back when I had an iPod Nano and a twice-weekly paper route. I’d stuff newspapers and flyers into their bags and then spend forty-five minutes to an hour delivering to about a hundred houses in my neighborhood. I did that from middle school until after high school. I have fond memories of feeling numb in the cold Canadian winters, delivering papers late at night, sometimes as late as 11 p.m. in negative twenty degrees with massive snowbanks and a heavy bag of papers.&lt;/p&gt;
&lt;p&gt;One of my favorite podcast memories was listening to the &lt;a href=&quot;https://majornelson.com/podcast/&quot;&gt;Xbox Podcast with Major Nelson&lt;/a&gt; and his co-hosts while on my route. The fun thing about that show was that after the main episode ended, they’d leave a minute or two of silence, then surprise you with an after-show where they’d just chat about random stuff for a few extra minutes. Their intro music was great, too. The funny part is, even though it was an Xbox podcast, I wasn’t really an Xbox gamer, I was a PC gamer, but I still loved hearing about the gaming news and technology advancements around the Xbox.&lt;/p&gt;
&lt;p&gt;Back to Security Now - I used to listen to their weekly episodes religiously. They explained topics like how internet and security protocols work, breaking down different systems and how they’ve evolved over time. That show gave me a lot of foundational knowledge I’ve used in my software development job and has given me a deep understanding of how these systems work.&lt;/p&gt;
&lt;p&gt;Another early podcast that captured the magic of the early internet being such a vibrant and entrepreneurial place for me was &lt;a href=&quot;https://www.youtube.com/DiggNation&quot;&gt;Diggnation&lt;/a&gt; with Kevin Rose and Alex Albrecht. That show was so cool and funny. Every week they’d talk about the top stories from Digg.com, drink beers, and just say whatever came to mind for those Silicon Valley guys. It really showcased the raw, chaotic coolness of the early internet.&lt;/p&gt;
&lt;p&gt;I’ll have to go back and figure out when I first started listening to podcasts, but I’m glad I don’t have to plug my iPod into my computer and sync it with iTunes to get the latest episodes.&lt;/p&gt;</content:encoded></item><item><title>Thirty one</title><link>https://jonsimpson.ca/thirty-one/</link><guid isPermaLink="true">https://jonsimpson.ca/thirty-one/</guid><description>Thirty one</description><pubDate>Wed, 01 Oct 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As I turn thirty one, I’m in Montreal, a place I’ve been several times over the past decade for both work and play, but all amazing times. It’s not just a cool destination where the people speak a language I barely know, but more of a reflection back on the experiences with friends that have continued to strengthen or the career moments that grew into something greater. Reflecting on this shows myself my progress and proof of caring about the things that matter most to me.&lt;/p&gt;
&lt;h2 id=&quot;travel&quot;&gt;Travel&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2025/10/hawaii1.jpg&quot; alt=&quot;Hawaii&amp;#x27;s Waikīkī Bay from the beach&quot;&gt;&lt;/p&gt;
&lt;p&gt;This has turned into a great travel year. Hawaii in October, Dominican Repubilc in February, Prince Edward County, London, and Amsterdam in June, and Montreal now in October. A handful of weddings to attend too. Of all the travel, Hawaii had to be my favourite as my partner and I spent a couple weeks island hopping and driving their beautiful coasts. Some of the best moments were snorkelling for hours with plentiful reefs of fish and turtles, driving the narrow north coast of Maui, and waking up hours before dawn to see the incredible sunrise on mount Haleakala - truly beautiful. The food was top-notch, whether it was a lovely beach resort’s dinner on halloween night vibes, or tasting the best cooked octopus that has ruined every other octopus dish for me, we ate well. Renting a car while staying on each island was a necessity with being back there again.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2025/10/hawaii2.jpg&quot; alt=&quot;Hawaii&amp;#x27;s Haleakalā at sunrise&quot;&gt;&lt;/p&gt;
&lt;p&gt;We went to London for a close friend’s wedding (congrats guys), and this was my first time there. We had a few days leading up to wander around, check out the sights, and grab some great food and beers. One day we hit up Bermondsey for their &lt;a href=&quot;https://bermondsey-beer-mile.co.uk/&quot;&gt;beer mile&lt;/a&gt; - a stretch of breweries in the east end where you can easily hop from one brewery to the next. A great time. I also went to Fabric, a night club that hosted several DJs one night before the wedding, all spinning dubplate: Miley Serious, Dr Dubplate, and Megra. It was so gratifying hearing proper UK club music as I started my Electronic music fascination with the likes of The Prodigy, Chase and Status, and Sub Focus. No crappy 90’s or ought’s music with a baseline, this was real club music where the beat kept going all night.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2025/10/amsterdam.jpg&quot; alt=&quot;One of the many Amsterdam canals at night&quot;&gt;&lt;/p&gt;
&lt;h2 id=&quot;life--dog&quot;&gt;Life / Dog&lt;/h2&gt;
&lt;p&gt;My partner and I picked up a puppy just a month ago! Poppy, a Portugese Waterdog, is already a central part of our lives making each day and night entertaining and fulfilling. The beginning was hard but it got much easier as time went on with her adjusting, getting house trained, and used to us not being near her every moment of the day. We’re now enjoying walks around the neighbourhood with her, introducing her to our friends and their dogs, and going to puppy training classes. She’s got an Instagram account too where she posts cute photos of herself accompanied with wild captions with her favourite EDM bangers. We’re looking forward to having her up to the cottage and out camping next summer to really enjoy the outdoors with her.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2025/10/poppy.jpg&quot; alt=&quot;Poppy the Portugese waterdog&quot;&gt;&lt;/p&gt;
&lt;h2 id=&quot;work&quot;&gt;Work&lt;/h2&gt;
&lt;p&gt;Since this time last year, Mantle has continued to prosper and grow - all because of the powerhouse team of founders and first hires. We’ve hired several people since then, all bringing great strengths in support, marketing, sales, and engineering. That makes 16 of us now. These investments are paying off in terms of allowing most of us bogged down with support and bugfixing to get back to what we’re truly great at, and work on some of the more impactful stuff.&lt;/p&gt;
&lt;p&gt;One of the best times of year, and likely becoming an important company tradition now, is our yearly conference we throw: &lt;a href=&quot;https://heymantle.com/techtonic&quot;&gt;Techtonic&lt;/a&gt;. Happening the same week of Shopify’s main partner conference in Toronto, this year we hosted Techtonic as a multi-day event in Toronto’s Distillery District with many amazing talks to benefit the Shopify Partner community. I could talk all about how cool the conference was, but the best parts were getting the entire team together, some whom many of us haven’t met in person yet, and chatting with new and old faces who are customers or just in the ecosystem. Great people all around.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2025/10/mantle.jpg&quot; alt=&quot;Mantle Techtonic conference&quot;&gt;&lt;/p&gt;
&lt;p&gt;This year has increasingly shown my own abilities to get shit done by shipping even faster than I have ever before. I give most of this to my growth in knowledge and confidence of making changes to our million line codebase. It’s now common to dream up and design features or entire products with my CEO, Jordan, and go and build it all pretty quickly. My current project is leading the build of an entire contact centre product for both email and live chat support. This is my strong suit as I have several years immersed in this world from my Shopify days. We’re beta testing it now with ourselves and a few customers, but it’s been the most fun I’ve had at building in a while.&lt;/p&gt;
&lt;p&gt;Lastly, my abilities at offsetting our CTO’s bus factor has been helping a lot when it comes to all of our infrastructure we use. Namely Kubernetes, Kafka, and our Postgres DB has been problematic from time to time, therefore we’re able to tag team these issues quite well. All that time being fascinated at production infra at my previous jobs is continuing to pay off.&lt;/p&gt;
&lt;h2 id=&quot;cycling&quot;&gt;Cycling&lt;/h2&gt;
&lt;p&gt;Prince Edward County (PEC) is becoming a staple to visit every year. Besides the beer, wine, and food, the county roads are very scenic, attracting cyclists to the area. On one beautiful and windy morning, I left Picton to roll along the far eastern part of PEC. Leaving around 6 am allowed me to snap some beautiful shots of Lake Ontario, and the tailwind kept me cruising fast. The further east I went the more desolate it got with less and less houses and more farm fields. At least at this part of the morning there weren’t many cars, and there certainly weren’t many cars out that way. After making it eastward, turning to go back west on some different roads showed that I underestimated the amount of headwind I was going to face the entire way back. No sweat, just take it low and slow. All in all, it was a very beautiful and satisfying 60km. Can’t wait to explore more of the roads elsewhere in the county.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2025/10/PEC.jpg&quot; alt=&quot;View of lake Ontario from PEC&quot;&gt;&lt;/p&gt;
&lt;p&gt;We had the opportunity to join some friends out at Ste Anne’s Spa in August. It was a very relaxing and fun time. After seeing signs for Ste Anne’s Spa over the years, it’s proximity to Lake Ontario, rolling hills, and country roads, I knew I needed to bring my bike and enjoy the new scenery. The front desk staff at the spa were all too helpful in pointing out some popular roads for cycling. After a late night of several drinks, I was still able to head out at 6 am to avoid the traffic and enjoy the beautiful sunrise. From up on the hills I headed down towards the lake, giving me a decent descent to rip down and wake me up real quick. Midway through I noticed my energy reserves weren’t doing too well, likely from the beers, but I pushed on at an easy pace. After going along forested residential roads down near the lake, pushing back up into the hills proved to be more scenic with lots of farmland and pastures. I think that’s why I enjoy cycling in Collingwood so much - all of its fruitful farmland. My legs were feeling it at this point, and I tried to take the most direct route back to our place. Even with being out of gas, the beautiful scenery kept me going. In the last km, in the opposite direction I took off, I encountered my last challenge: a 37m climb that started out easy but then turned steep. Overall I arrived just in time for breakfast with the gang and 50 km to feel satisfied about. It’s pretty novel for me to have gone out and done an epic adventure before anyone has started their day yet. I remember that day being a tiring drive back to Ottawa too.&lt;/p&gt;
&lt;p&gt;Oh Collingwood. It’s now been three years of cycling its beautiful hills and paths. I only had a few days up there this year, so I decided to make the most of it by heading down to Creemore again. A great 78 km in the bag, with a stop in Creemore to have breakfast with my partner at Creemore Bakery, which makes an effort to support cycling in the area with their bike stands out front. It was nice going in the opposite direction I went last year to see things from a different angle, and to enjoy some of the roads that were just gravel last year.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2025/10/tavern.jpg&quot; alt=&quot;Tavern at the Falls&quot;&gt;&lt;/p&gt;
&lt;p&gt;A buddy and I had some gift cards for some local patio bars. Each patio was part of the “Tavern” umbrella. When we received these gift cards we were talking about the idea of making a day trip of cycling and having a beer at each one. Later on in the summer the idea was resurrected and a third friend joined us on the trip. Having already been to some of these Tavern locations in the past, we saved those best ones for last. The order we decided on was Tavern on the Island, Tavern at the Lake, Tavern on the Falls, Tavern at the Gallery, and lastly Tavern on the Hill. We picked the furthest out ones first then had the closest and best ones for last. We lucked out with it being a beautiful day, and beyond some gift card issues, each spot was busy and bumping. At Tavern on the Hill, our last stop, we celebrated with beers and dressed up hot dogs as we talked about how we would rank our least favourite to most favourite. The sun set and the live DJ set made this last stop a worthy finish to the tour. After almost 8 hours from start to finish, we only rode 31 km, but most of the time was enjoyed on those patios.&lt;/p&gt;
&lt;h2 id=&quot;the-next-year&quot;&gt;The next year&lt;/h2&gt;
&lt;p&gt;It’s exciting to be looking forward to this next year, epsecially in several different areas: having fun with Poppy as she grows up, the start of another winter season of skiing (hopefully I’ll get out west again), Mantle reaching another inflection point of growth, and some travel plans for Australia and even out to the east coast. It’s going to be a wonderful year.&lt;/p&gt;</content:encoded></item><item><title>Dogfood calculator</title><link>https://jonsimpson.ca/dogfood-calculator/</link><guid isPermaLink="true">https://jonsimpson.ca/dogfood-calculator/</guid><description>Dogfood calculator</description><pubDate>Mon, 15 Sep 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;My partner and I got a dog! We’re loving her. It’s been a few weeks of ups and downs as we figure out how to house train it, but it’s getting better every day. As Poppy grows, she also eats a lot - and it increases as the weeks go by. We’ve been feeding her a diet of 50% kibble and 50% raw dog food. The math is straightforward - until she grows into the next category of age or weight.&lt;/p&gt;
&lt;p&gt;I got bored one weekend and decided to whip up a simple calculator that, given her current age and weight, would tell us how much food to give her each time we feed her. Some AI-assisted coding and manual data entry for the feeding tables later, and here we have it: &lt;a href=&quot;https://dogfood.jonniesweb.workers.dev&quot;&gt;https://dogfood.jonniesweb.workers.dev&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The main feature I was looking for was the ability to use linear interpolation to determine the exact amount of food to give her for any weight or age, not just the broad ranges that the food packaging explains. For example, if she’s 7 lbs, then she needs more food than the 5 lb category and less than the 10 lb category - approximately 40% of the way through that 5-10 lb bracket.&lt;/p&gt;
&lt;p&gt;Another necessary feature was converting the kibble food amount from cups to grams to make it easy to portion out both kibble and raw food at the same time with a kitchen scale. Hardcoding the equivalent of 1 cup of kibble to grams solved that.&lt;/p&gt;
&lt;p&gt;Making the UX look nice and convenient to use is even simpler when the AI is building it. Saving the last values used for age and weight, as well as a cool spinning Poppy head background, makes it fun to use.&lt;/p&gt;
&lt;p&gt;Most of the time was spent figuring out how to deploy this thing. Apparently Cloudflare Pages is deprecated and Cloudflare Workers now support deploying static content. Hours later and it should be running forever and for free. Code is here if anyone wants to check it out: &lt;a href=&quot;https://github.com/jonniesweb/dogfood&quot;&gt;https://github.com/jonniesweb/dogfood&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This app has already come in handy several times. A few hours over the weekend well spent.&lt;/p&gt;</content:encoded></item><item><title>Just go build it!</title><link>https://jonsimpson.ca/just-go-build-it/</link><guid isPermaLink="true">https://jonsimpson.ca/just-go-build-it/</guid><description>Just go build it!</description><pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Looking back at the past couple of years of homeownership, it’s crazy thinking of how much stuff I’ve done myself. I can count on a single hand the amount of times I had to have someone come in and perform work for me: water heater fixing, dryer fixing, kitchen countertop installation, car maintenance. Yeah, that’s it.&lt;/p&gt;
&lt;p&gt;The list of things I have done is quite large in comparison. I don’t think I’ve actually itemized them yet, but here it goes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;replace some deck boards&lt;/li&gt;
&lt;li&gt;stain the deck&lt;/li&gt;
&lt;li&gt;repaint the entire house&lt;/li&gt;
&lt;li&gt;build an outdoor sectional sofa&lt;/li&gt;
&lt;li&gt;demoed a bathroom&lt;/li&gt;
&lt;li&gt;renovate a bathroom&lt;/li&gt;
&lt;li&gt;demoed a kitchen&lt;/li&gt;
&lt;li&gt;install kitchen cabinets&lt;/li&gt;
&lt;li&gt;tile a shower&lt;/li&gt;
&lt;li&gt;tile a kitchen backsplash&lt;/li&gt;
&lt;li&gt;tile a floor&lt;/li&gt;
&lt;li&gt;install a heated floor&lt;/li&gt;
&lt;li&gt;plumb in a few sinks&lt;/li&gt;
&lt;li&gt;install laminate flooring&lt;/li&gt;
&lt;li&gt;build and tile a shower niche&lt;/li&gt;
&lt;li&gt;plumb in a shower stall&lt;/li&gt;
&lt;li&gt;move an electrical box for a ceiling fan&lt;/li&gt;
&lt;li&gt;paint spray several closet and regular doors&lt;/li&gt;
&lt;li&gt;clean out a faulty freezer’s drain&lt;/li&gt;
&lt;li&gt;rebuild several windowsills&lt;/li&gt;
&lt;li&gt;paint an entire condo&lt;/li&gt;
&lt;li&gt;run ethernet cables through several floors&lt;/li&gt;
&lt;li&gt;build a few standing planter pots&lt;/li&gt;
&lt;li&gt;polish and wax my car&lt;/li&gt;
&lt;li&gt;paint an entire cottage&lt;/li&gt;
&lt;li&gt;furnished an apartment for rent&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Prior to owning my house I’ve only ever done some of the following tasks and their difficulties: deck staining (easy), installing laminate floor (easy), demolition (medium), and some light painting of rooms (easy), framing (medium). I did all of this with my family, who was also figuring it out as they went, as no one was in the trades. That’s probably where I got the seed to be a DIY’er, and the frugality of having someone do something I could likely easily do myself.&lt;/p&gt;
&lt;h2 id=&quot;incrementally-learning&quot;&gt;Incrementally learning&lt;/h2&gt;
&lt;p&gt;I didn’t have all the skills needed to even consider doing all these new tasks. When I bought my place I knew I wanted to paint my own unit and renovate the other unit, which was in major disrepair. Painting my unit was an easy start, I at least painted a bit in the past, probably several years earlier, but knew I could do it.&lt;/p&gt;
&lt;p&gt;As soon as I gained possession I wanted to start changing the colour of the place. It was simple to go and buy the necessary tools: brushes, rollers, naps, paint trays, drop sheets, painters tape, extender poles, and paint. Then when I started, I quickly found that I would need a ladder, handheld paint bucket, a very bright light, and technique! Oh the technique. I started delving into YouTube videos while exhausted late at night to discover some quite amazing channels dedicated to the different trades or DIY in general. Technique can take you from what might feel fast and accurate at applying an even layer of paint (but in reality quite slow and not great application), and open your mind to how the pros do it: with speed, accuracy, and efficiency. It’s not always having better tools that makes the difference, a lot of it is in the way you load up your brush or roller and get the paint on the walls or trim. I quickly learned quite a few techniques that helped me paint, but it took time to master them and hone in on it. I can look back at my own unit and pick out several techniques that I either didn’t learn early enough or just didn’t master yet. The proof is there, and when you know and have performed these techniques properly, they stand out. But that’s just something else you pick up from learning from these professionals: a high attention to detail.&lt;/p&gt;
&lt;p&gt;The biggest takeaway I have for any current and future DIY’ers is watch a lot of professionals on YouTube. There’s so much that can be learned, and many videos that directly explain the task you may be tackling next or potentially in the future. For example, I had no idea what would be involved for painting crown molding. Lots of places were pointing to having to use some significant chemicals to dull the sheen of the paint so that the new layer of paint will adhere correctly. In reality, no one is going to be touching that crown molding, so a quick clean of any potential dirt off of it will suffice before applying a fresh coat of paint. I would never have known this unless I was consuming hours of professionals share their tips on how they paint crown molding.&lt;/p&gt;
&lt;p&gt;Some recommendations for professionals that I’ve learned quite a lot from (read: watched dozens of hours of their content) are:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/@vancouvercarpenter&quot;&gt;Vancouver Carpenter&lt;/a&gt; - mainly for their caulking, painting, drywalling, and drywall mudding, and drywall repair content. He’s Canadian, a tradesman running his own business, and has been making great videos for a few years now. I’ve learned most of my drywalling, drywall repair and caulking skills from him.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/@Idahopainter&quot;&gt;Paint Life TV / Idaho Carpenter&lt;/a&gt; - mainly for their caulking and painting content. The owner of this professional painting company gets across a lot of information about how a professional painter team would paint a room, or entire building, and how someone by themselves would do it too. I have learned so much technique to my painting that has greatly helped with my cutting and rolling abilities. I now consider myself quite fast and accurate all from his technique.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/@HomeRenoVisionDIY&quot;&gt;Home RenoVision DIY&lt;/a&gt; - Jeff is great - but don’t hire him to do your job. The internet praises most of his content for being quite helpful and entertaining, but he has apparently scammed a few people when doing their renovations. He’s a professional handyman who has taught me almost everything I know for many subjects: tiling, plumbing, and renovations in general.&lt;/p&gt;
&lt;p&gt;Some of the folks that were more for entertainment purposes are the YouTube shorts of &lt;a href=&quot;https://www.youtube.com/@hydronyc&quot;&gt;HydroNYC&lt;/a&gt; and &lt;a href=&quot;https://www.youtube.com/@replumb&quot;&gt;Replumb&lt;/a&gt; - both plumbers doing funny videos.&lt;/p&gt;
&lt;h3 id=&quot;geeking-out-on-technique&quot;&gt;Geeking out on technique&lt;/h3&gt;
&lt;p&gt;There’s several categories of DIY that I’ve grown a ton at. Much of that was due to watching and reading tons of content, as well as hours and hours of practice. Here’s several categories which I need to nerd out over. Hopefully it’s relatable or a trick or two is taken away.&lt;/p&gt;
&lt;h2 id=&quot;the-art-of-painting&quot;&gt;The art of painting&lt;/h2&gt;
&lt;p&gt;Where to start. My favourite part of painting a house probably has to be rolling walls to an immaculate, streak free, textured finish. Some of the key techniques to get there are:&lt;/p&gt;
&lt;p&gt;Make it easier to clean the roller nap after by running water over the nap and then wringing out most of the water by spinning the nap on the roller. Do this in an area like the outdoors or in an area where the floor is protected since the water droplets can contain trace amounts of paint. This works by embedding water deep into the fibres of the nap to prevent the paint from embedding itself in there instead. Make sure there’s no lingering water or else that will have an effect on the paint.&lt;/p&gt;
&lt;p&gt;When loading up the roller with paint, run it back and forth in the tray for 20 seconds or so to really load it up into the fibres. You’ll want to do a few lines with the roller, and go back to smooth them out since the roller wont be applying the paint evenly. Once it’s properly broken in after a few lines on the wall, you’re good to go.&lt;/p&gt;
&lt;p&gt;Painting quickly and effectively means getting a decent amount of paint up on the wall (or ceiling) such that it’s not too thick or thin of a coating, and that there’s no lines between strokes. Paint Life TV taught me this critical technique that I’ll never forget. It starts with loading up your roller. Then you’ll want to roll straight down the wall, starting at least 1/4 away from the top ceiling. Roll back up and down once or a few times to get the layer looking fairly even. Now load up your roller again and do the next line, having a 1/3 to 1/4 overlap. Repeat four or so times. It’s okay if there’s slight paint streaks between the lines that were just rolled. It’s now time to load up your roller with 3/4 the amount of paint to give the wall its final texturing. Start at the very top and have the roller apply light pressure to the wall, and let gravity bring the roller down. This should have the effect of adding a lot of texture to the wall, without removing the paint you just put on. If there’s smudges or no texture spots then go back and do the texturing again. Do this light texturing over the four or so lines you just painted to add texture and clean up any slight lines that were left. Doing this should yield evenly coated walls that blend together with a perfect amount of texture. As you get good at it you can load up a fair bit of paint and reduce the amount of coats you have to do by going for a heavy first coat and a light second or third coat. Just be careful not to cause the paint to drip or slide on the wall as it dries, it’s quite hard to recover at that point.&lt;/p&gt;
&lt;p&gt;Using a double-wide roller (a 18” instead of a regular 9”) is an absolute life hack. It’ll save you so much time if you’re confident in your 9” rolling abilities. There’s a small learning curve with maneuvering a large roller like this, but if you have many walls or ceilings to paint, it’s easily worth its cost and time to learn. My last painting experience was painting an entire cottage over a few weekday evenings and a weekend. It flew by with the double-wide roller. I haven’t yet figured out how to speed up cutting, but lets get into that now.&lt;/p&gt;
&lt;p&gt;There’s quite a few considerations when it comes to cutting walls and ceilings. Brush size, how much paint you load the brush up with, the direction you cut in, how many coats you do, whether to also use a roller, etc. I like to use a high quality 4” brush since it can hold a fair bit of paint, as well as comfortably cover a decent amount of wall, ceiling, or baseboard with speed. That, paired with a paint pail allows me to conveniently work while standing or up a ladder. Cutting walls and ceilings is at least a two-coat process. The trick that makes the biggest difference with getting a properly textured corner without any sort of streaks in the corner is first painting in the corner with the brush, then using a small roller to put more paint on, and add the necessary texture. Try getting as close as you can to the adjacent wall with the roller so that the texture part covers over any brush strokes and blends in with the corner. A second coat may just require a second rolling if the paint brush got enough paint into the corner on the first coating.&lt;/p&gt;
&lt;p&gt;Similar to the paint roller, the paint brush can also be run through water before painting to make it easier to clean at the end of the painting session. Make sure to spin or flick as much water as you can out of the brush, otherwise the lingering water will cause some water to drip from the top of your brush while painting.&lt;/p&gt;
&lt;p&gt;The order of cutting the walls, ceilings, baseboards, and even crown molding (if you have it) can make the painting process quite quick or painfully slow. Cutting the baseboard into the already painted wall is an exercise in pain to get the top 1/4” covered if this is the scenario you’re in. Likely learned from Paint Life TV, the best order to paint in is doing any sort of trim first, then the ceilings, then finally the walls. This order allows for overlapping the trim paint onto the walls, for example, which guarantees that there’s no gaps showing unpainted surfaces. If the paint you’re using allows you to overlap the trim/wall/baseboard paint, do it, it’ll prevent any gaps from being missed. With doing a slight overlap when painting the ceiling, when you go to cut the walls into the ceiling, it’ll be a clean edge. Same with the walls cutting around the baseboards. I’ve wasted plenty of my time painting in the wrong order, especially with the crown molding present in every room of my house. Lastly, since at least two coats need to be applied to each surface, the order of the first coat doesn’t really matter. The second or further coats only matter for making it easy to cut and preventing paint splatter from the ceilings on to the walls and trim.&lt;/p&gt;
&lt;p&gt;When painting any sort of trim with paint that has a high sheen/gloss, it’s super easy and likely frustrating how many brush strokes show. Too much pressure or not enough paint exacerbates this. The trick of reducing the brush strokes is getting good at loading up your brush with a fair amount of paint, almost too much, and painting from the unpainted area into the painted area with an even but light amount of pressure. I find that having the brush on quite the angle, almost parallel to the wall, helps a lot with this. The first few strokes are to just get enough paint on the trim, to get it evenly covered. At this point you probably noticed a lot of brush strokes going in different directions, and spots where the brush stopped or started at. To do the finishing touches the general technique is to make some very light and long strokes. This has the effect of evening out the lines, and getting rid of any other lines going in different directions. Another technique is the landing and taking off method where when the brush starts to touch the trim the brush is already moving, resulting in a cleaner start. Same goes for ending the brush stroke by continuing to move along the trim while lightly lifting the brush off. With some practice that should have the effect of hiding the point at which the brush stopped quite nicely.&lt;/p&gt;
&lt;p&gt;Another trick with painting high gloss trim is to use a bit of paint extender in your paint. This stuff is great since it makes your paint less viscous which helps to hide even more brush strokes. Since the paint runs more, it can be harder to handle and if the paint is applied too thick, then it can run. Whatever you do don’t use actual water, since the paint extender is actually formulated to be added to paint.&lt;/p&gt;
&lt;p&gt;You should be able to get quite a few lifetimes out of roller naps and brushes with the proper cleaning techniques. Key to this is not letting any paint actually dry on them. So if you’re taking a break, make sure the naps and brushes are full of paint, and the paint trays or pails are covered with a plastic bag. When it comes time to clean your brushes, assuming it’s acrylic paint, turn the sink’s water on and run the brush under the water. Use a wire brush to scrape through the bristles to remove any stuck on paint matter and to help get the running water deep into the brush. Also dab the brush’s bristles on to the bottom of the sink to continue to get more paint out. Keep doing this until the water running out of the brush is white. Now flick or spin any extra water off of the brush. It’ll take a few minutes, but the brush will be just like brand new after. For cleaning roller naps, using one of the painter’s 5 in 1 tools is very useful. There’s a round part to the tool where you can run it along the roller to squeeze off all the paint that the roller is holding. Squeeze off that extra paint into the paint tray before taking the roller. While the nap is still on the roller, run the tap so that water pours onto the nap. Use the 5 in 1 tool again to squeeze off the water saturated nap. Keep doing this while rotating the roller, and flip it upside down a few times until the water coming off of the nap runs clear. Take the nap off of the roller and clean out the inside of the nap and the roller itself from any paint that may have leaked in. Put the nap back on and spin the roller fast to wring off any extra water. Leave the nap either hanging or standing up to dry. Depending on the sink you’re using to clean the naps and brushes the water coming off while cleaning can leave some marks. Be careful if it’s a kitchen sink. The best place would be a basement utility sink where making a mess doesn’t matter as much.&lt;/p&gt;
&lt;h2 id=&quot;the-art-of-tiling&quot;&gt;The art of tiling&lt;/h2&gt;
&lt;p&gt;The tiling process can be quite dirty and finicky. It’s also harder to undo mistakes compared to painting since tiles would be coated in mortar or stuck to the surface already. Even grouting can be unforgiving if some spots are missed. All this to say that it takes more preparation and double checking of your work at each step.&lt;/p&gt;
&lt;p&gt;Laying out the tile beforehand to get a sense of where cuts need to be made, lines would end up is critical to prevent an edge being just a bit too short on one side for a second tile to fit in. It also prevents any large expanses of grout covering a large area.&lt;/p&gt;
&lt;p&gt;When tiling a floor, if there’s any sort of unevenness on the ground using floor leveler to create a level surface is quite useful. It’ll provide more support and a smooth surface for the tiles to be attached to. You can even use the floor leveler after installing heated floor wiring to create a smooth surface over the protruding wires.&lt;/p&gt;
&lt;p&gt;When placing the tiles, make sure not to push it too hard on to the grout - you really only need a firm amount of pressure to seat the tile into the backing mortar. Pushing more than necessary can cause the mortar to push up around the sides, which is something to avoid since that will get in the way of the grout. You’ll want to scrape out any of that mortar within the cracks since it can poke through the grout and look terrible.&lt;/p&gt;
&lt;p&gt;Grouting can be quite messy, but the results look great and signal the end of the tiling process. I would recommend using a grout that has a built-in sealer since it removes the extra step of sealing the grout. People say that this is a more advanced type of grout since it has a shorter working time, and will ruin your tiles if it’s not applied and cleaned off the tiles quick enough. For example, the sealant will leave a permanent haze on the tiles if this grout is not fully buffed off after. In reality, it’s not that hard. Working in sections, having all the supplies for all the steps nearby, and having someone else to help makes it pretty easy.&lt;/p&gt;
&lt;h2 id=&quot;the-art-of-drywalling&quot;&gt;The art of drywalling&lt;/h2&gt;
&lt;p&gt;Drywalling can be pretty fun. Being able to shape walls from rough, barren 2x4s to a seamless surface is almost as satisfying as painting. Even patching holes, filling over cracks, and smoothing out bumps can be pretty decent. Drywalling definitely has a bit more of a learning curve, and can be more messy than painting because of all of the dust.&lt;/p&gt;
&lt;p&gt;Having some of the pink premixed drywall compound is useful for small knicks and patches, while also investing in a large bag of dry compound is great for fixing several small things or for any job larger than the pink compound would work for. The great thing about having the big bag of proper stuff around is that you can use as much or as little as you need, at the right consistency, and you don’t feel bad about wasting any compound since it’s cheap.&lt;/p&gt;
&lt;p&gt;When applying drywall compound I find that having an assortment of drywall knives helps greatly with being able to use the right sized tool for the job. Small 1” knives for the smallest of areas, a 4-6” one for the larger patches, and a 10” for large areas which require more control over applying compound evenly all the while feathering the edge.&lt;/p&gt;
&lt;p&gt;Speaking of feathering the edge, being proficient at applying compound such that it blends in with the rest of the wall or ceiling takes some handy knife or trowel skill. The goal is putting pressure on the outside edge of the tool to cause the compound to blend into the surrounding drywall. If you were to look at the wall and where the compound was just applied, you’ll want a very smooth transition from drywall to the compound. Feather the edge all the way around the area with the compound and you’re set. The Vancouver Carpenter taught me everything I know about drywalling, and his feather the edge comments throughout most of his drywalling videos helped.&lt;/p&gt;
&lt;p&gt;When applying the compound, avoid pushing too hard since that can make it more difficult to sand the compound later. Applying enough force to get the compound to stick and spread is all that’s needed.&lt;/p&gt;
&lt;p&gt;When it comes to filling in areas wider than an inch or two the difference between noticing the wall was patched from a distance and blending in completely when close up comes down to how close you can match the level of the patch with the rest of the wall, how smooth the compound blends in with the surrounding wall, and the paint job afterwards. Imagine patching a big hole like a wall switch. The drywall is attached, and when it comes to applying the drywall compound, apply compound that’s at least twice as wide as the hole being patched. This allows the compound to smooth out any bumps or ridges by gradually merging the patch into the surrounding drywall. Having a large putty knife such as a 10” one makes it a lot easier to smoothly apply the compound. Sanding can remove mistakes, but the more effective at applying the compound you are, the less sanding is required.&lt;/p&gt;
&lt;p&gt;With small patches it’s possible to perform some sanding before the compound has fully dried. Commonly called wet sanding, using a damp cloth or putty knife to very lightly smooth out the compound makes it possible to remove ridges or take away excess compound. This method is great for when filling small holes since not much putty is needed in the first place, saves time, and results in no dust.&lt;/p&gt;</content:encoded></item><item><title>Fixing WiFi for Fun and Profit</title><link>https://jonsimpson.ca/fixing-wifi-for-fun-and-profit/</link><guid isPermaLink="true">https://jonsimpson.ca/fixing-wifi-for-fun-and-profit/</guid><description>Fixing WiFi for Fun and Profit</description><pubDate>Sun, 01 Jun 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently, a buddy and I had the opportunity to help fix the WiFi at a couple of local bars, both part of a larger chain. Being an Ottawa local, this sounded like a great gig - I love a good patio, and beer doesn’t hurt either.&lt;/p&gt;
&lt;h2 id=&quot;first-location-the-heritage-building&quot;&gt;First Location: The Heritage Building&lt;/h2&gt;
&lt;p&gt;The first location was an old heritage building with a large, scenic patio. After talking with the general manager, I learned they were running a creative setup with both wired and wireless networking gear: a combination of WiFi routers turned into switches with SSIDs combined so devices like ordering iPads and payment terminals could seamlessly roam between all access points to serve the bar’s patrons from anywhere.&lt;/p&gt;
&lt;p&gt;The issue turned out to be straightforward but problematic. One access point wasn’t in bridge mode, which meant devices connected to it were on their own subnet and couldn’t communicate with other devices, including the server that runs the restaurant’s ordering system and all the order printers. I assigned manual DHCP leases to the order printers so that when they’re eventually reset for whatever reason, they’d still receive the same IP address and keep working with the restaurant management server.&lt;/p&gt;
&lt;p&gt;I also hooked up a new WiFi router on the other side of the bar to boost signal coverage. Thankfully, it’s a very open and airy space, so the signal travels far without much interference from other networks.&lt;/p&gt;
&lt;h2 id=&quot;second-location-the-museum-patio-bar&quot;&gt;Second Location: The Museum Patio Bar&lt;/h2&gt;
&lt;p&gt;The second location was another patio bar in downtown Ottawa, situated off of a museum. It’s wide open and secluded, which should have made this an easy job to configure the network and make sure the signal is strong everywhere. However, it turned into a larger issue when we discovered that the internet had been working but then stopped a few days before we were scheduled to show up. The bar was set to open the next day, so fixing the internet became the top priority or else they may have to delay opening.&lt;/p&gt;
&lt;p&gt;I found myself hanging out in the bar’s small office right off the kitchen while the team was busy testing menus, training new cooks, refreshing landscaping, and fixing deck boards — lots going on all at once. We immediately became “the IT guys” to everyone who saw us working. Roles were clearly defined as the patio was brought back to life after a long winter.&lt;/p&gt;
&lt;p&gt;We couldn’t get the gateway router to dynamically receive a DHCP address, so we tried setting the static address, gateway, and subnet provided by the museum’s uplink. No luck. We connected the bar’s computer directly to the incoming ethernet and tried the same configurations, but still nothing worked.&lt;/p&gt;
&lt;p&gt;At this point, I headed back home to grab an old Linux laptop with a built-in ethernet jack. Using ChatGPT for guidance, I started testing whether ARP was even working. Basically checking if there was another device on the other side of the ethernet and fiber run that was responding. Running &lt;code&gt;tcpdump -i en2ps0 arp&lt;/code&gt; was incredibly helpful in confirming what the issue was, as it showed me the raw ARP requests going across the network. I could see my computer sending messages, but nothing was coming back. Not a good sign. This was likely something that required the museum’s IT team to resolve. Probably a broken cable or misconfiguration on their end.&lt;/p&gt;
&lt;p&gt;We ended up forming a contingency plan with the general manager for the bar to open the next day. The plan combined the restaurant management software’s ability to work offline with a credit card payment terminal that used cellular service.&lt;/p&gt;
&lt;p&gt;On the plus side, we enjoyed a pint of beer on the patio as the first patrons of the season.&lt;/p&gt;
&lt;h2 id=&quot;museum-part-two-success&quot;&gt;Museum Part Two: Success&lt;/h2&gt;
&lt;p&gt;We later found out it was indeed a fiber optic cable issue, which the museum’s IT team confirmed and fixed. With internet restored, we returned to focus on getting the newly installed WiFi mesh network set up using the same WiFi SSID, testing signal strength and speeds, and confirming all ordering tablets and payment terminals were connected properly.&lt;/p&gt;
&lt;p&gt;After setting up the mesh network, I used WiFiman from Ubiquiti to test signal strength and roaming capabilities throughout the entire bar. We achieved great signal strength everywhere, so devices shouldn’t have any connectivity issues regardless of location. We did notice that internet speed was 30 Mbps down — not blazing fast by today’s standards, but absolutely sufficient for payment processing devices and streaming some Spotify music. Hopefully employees don’t jump on the network and stream too many videos on their breaks, which could slow things down.&lt;/p&gt;
&lt;p&gt;I noticed something interesting during the speed tests: when connected to some WiFi access points, I’d get the full 30 Mbps, but others only delivered 8-10 Mbps. Digging into the WiFi mesh settings, I discovered two of the three access points had link speeds of 10 Mbps instead of 100 Mbps — definitely the bottleneck. We narrowed this down to one device being directly connected to the main POE switch (the 100 Mbps one), while the troublesome ones (10 Mbps) were located on the far side of the bar. These problematic access points were using very cheap PoE switches to provide ethernet and power to other devices at a satellite ordering station and bar.&lt;/p&gt;
&lt;p&gt;We determined that everything should function well enough even with the 10 Mbps limitation, but if there are slowness or connectivity issues on the far side of the bar, those cheap PoE switches should be the first thing to replace.&lt;/p&gt;
&lt;p&gt;Last but not least, we finished that afternoon with a well-deserved pint on the patio.&lt;/p&gt;</content:encoded></item><item><title>One year at Mantle</title><link>https://jonsimpson.ca/one-year-at-mantle/</link><guid isPermaLink="true">https://jonsimpson.ca/one-year-at-mantle/</guid><description>One year at Mantle</description><pubDate>Sun, 20 Apr 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I’ve just passed my one‑year anniversary at Mantle. LinkedIn posted a rather lackluster anniversary message, so I decided to do something better—albeit a couple months late. As Jordan, Mantle’s CEO, jokingly noted, it’s no coincidence that since I joined, we’ve gone from zero revenue to where we are today. But in all seriousness, it’s been an action‑packed year of personal growth, impressive product development, and a company trajectory that’s unmistakably headed up and to the right. It’s also been a rewarding challenge to dismantle the habits and norms I developed at a large tech firm and embrace a “just get it out there and into customers’ hands” mentality. Every few months we hire someone new, hit another milestone, or experience fresh growing pains—all of which make each day truly exciting.&lt;/p&gt;
&lt;p&gt;After a year, you have a clear understanding of the company’s priorities, direction, and preferred trade‑offs. This alignment empowers you to work independently—often taking an idea or new feature from concept to production without needing anyone else’s approval. As a result, we can deliver new features to customers incredibly quickly. When a customer requests assistance or new functionality, nothing stands in our way: we can respond, build the solution, and get it into their hands. It’s no wonder our customers are consistently impressed by how fast we solve their problems.&lt;/p&gt;
&lt;p&gt;In a software role, one year of experience equips you to navigate and understand our substantial 500,000‑line codebase, making new features, bug fixes, and even entirely new products feel second nature. The best way to gain a broad understanding of any codebase is to dive in: fix bugs, ask questions about implementation details and expected behavior, and read a lot of code. Mastering your tools and embracing the team’s development practices makes it easier to deliver the right balance of product, quality, and speed.&lt;/p&gt;
&lt;p&gt;After working together long enough, the trust you build with your teammates enables you to perform at a high level and make the right decisions and trade‑offs—knowing when to engage the team and when to run with an idea. Early on, I tended to overcommunicate to ensure I was headed in the right direction, understood priorities, and grasped how things worked. But with a proven track record, I now rarely need to overexplain; I understand what’s needed, and my colleagues trust me to do the right thing.&lt;/p&gt;
&lt;p&gt;A candid conversation with Jordan about our one‑year milestone and the company’s growth made me realize one thing missing from this reflection: the sense of belonging I feel with the team. It no longer feels like I’m the new engineer striving to match the founders’ level. Instead, the trust we’ve built and the ability to operate at the same level truly reminds me how far I’ve come.&lt;/p&gt;</content:encoded></item><item><title>Using Codesandbox for coding interviews</title><link>https://jonsimpson.ca/using-codesandbox-for-coding-interviews/</link><guid isPermaLink="true">https://jonsimpson.ca/using-codesandbox-for-coding-interviews/</guid><description>Using Codesandbox for coding interviews</description><pubDate>Tue, 28 Jan 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;At Mantle, we’re growing our team, including engineers! I was the hiring manager responsible for hiring a Support Engineer to help balance the support workload. Currently, everyone contributes to supporting our customers in one form or another, but our growing customer base has reached a tipping point where we can justify hiring someone dedicated to supporting this growth and focusing on customer-driven product improvements.&lt;/p&gt;
&lt;p&gt;Since we’re interviewing external candidates instead of hiring people we already know, it’s essential to build confidence in the candidate by understanding their skills and their fit for both the role and the company. Being solely responsible for the technical screening meant I needed to assess their coding skills, along with other engineering-related qualifications. I needed to be able to pair-program with them as they wrote their code. Having previously conducted dozens of interviews for technical roles at Shopify, we had a well-oiled process and set of tools at our disposal. These enabled any interviewer to gain enough confidence by the end of the interview to determine whether the candidate would be successful or not.&lt;/p&gt;
&lt;p&gt;At a startup, however, you don’t have any of that. You start from scratch, or, if you’re lucky, you can improvise based on a process someone else documented. At this stage, you just have to figure out what works—and if you don’t have to spend any money doing so, all the better.&lt;/p&gt;
&lt;p&gt;Assessing coding abilities is relatively easy if you’re able to share the same screen and keyboard as the candidate. You present a problem and work with them to solve it, giving nudges along the way and asking questions to explore their abilities. In the remote-first interview world, you’re likely on a video call with the candidate, so you need to either share their screen or use an online coding tool to write and ideally execute the code. At Shopify, we used Coderpad, which is an excellent tool. You select a language, write code on the left-hand side, and see the code execute on the right-hand side. Both the interviewer and candidate can modify the code simultaneously. However, this is a paid tool, so, driven to find a simple and free alternative, I started poking around on Reddit. That’s when I came across Codesandbox.io.&lt;/p&gt;
&lt;h2 id=&quot;codesandboxio&quot;&gt;Codesandbox.io&lt;/h2&gt;
&lt;p&gt;Based on a recommendation from Reddit, Codesandbox offers an excellent multi-person IDE and code execution environment for a wide variety of languages. Its assortment of templates makes it very easy to get an environment up and running with any language or framework to print “hello world.” When you go to create a new coding project, you can choose to use a Devbox—a small VM to run your code in. This provides a full VSCode-esque experience, allowing you to run client- or server-side code for many different languages. After just a few clicks, I had a React frontend and backend running and printing “Hello world” in seconds. By default, the window opens with the directory listing in a panel on the left, code in the centre, a browser running the app on the right, and a terminal along the bottom—everything you could ever need. Depending on the language or template you’re using, the layout can vary.&lt;/p&gt;
&lt;p&gt;Starting and sharing a live coding session appears to only be possible with Devboxes. Once a Devbox is running, you can easily share the current session with the candidate via a link. The candidate will need to create an account to use Codesandbox, but it’s otherwise quite frictionless. Once the interview is over, you can stop sharing the Devbox to prevent anyone, including the candidate, from accessing it. One caveat is that the Devbox runtimes consume credits while they’re powered on, but the free plan is quite generous. Each month, you get 400 credits for free, which provides about 57 hours of Devbox time per month if you use the lowest resource specs—perfect for simple programming challenges. Just make sure to hibernate the Devbox after use, or it could consume all your credits.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2025/01/codesandbox.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;h2 id=&quot;does-it-work&quot;&gt;Does it work?&lt;/h2&gt;
&lt;p&gt;As of finishing this article, I’ve conducted a couple of technical interviews using Codesandbox to test candidates’ coding skills. Codesandbox was successful! Once the candidate gained access to the Codesandbox, it was easy to focus on the coding. I’d highly recommend this tool for others looking for a quick and effective solution to running technical interviews!&lt;/p&gt;</content:encoded></item><item><title>Thirty</title><link>https://jonsimpson.ca/thirty/</link><guid isPermaLink="true">https://jonsimpson.ca/thirty/</guid><description>Thirty</description><pubDate>Tue, 01 Oct 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It’s that time of year again, and this time it’s marking my thirtieth birthday. This marks 10 years of doing these types of yearly posts, which is a cool accomplishment to look back on.&lt;/p&gt;
&lt;p&gt;As I think about my major milestones, there are a few that I’m quite proud of, but don’t want to share due to wanting to live a private life. If you see me in person, feel free to ask what they were though!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2024/10/2024-patio.jpg&quot; alt=&quot;/static/images/2024/10/2024-patio.jpg&quot;&gt;&lt;/p&gt;
&lt;h2 id=&quot;career&quot;&gt;Career&lt;/h2&gt;
&lt;p&gt;This time last year I was unemployed and living my best life. Taking a break from big tech was a well needed relief. Once the cold weather started coming and the travel and DIY plans all ceased, it was time to get back into focusing on my career. Long story short, while applying for jobs at some well known places, a ex-Shopify friend of mine reached out about joining his startup, Mantle. They wanted me specifically, their product was compelling, and the team of founders are jacked in the nerdiest and business sense way possible. Plus, I knew a bunch of them from my time at Shopify. It was an easy decision.&lt;/p&gt;
&lt;p&gt;A few days later and I was already hired. It’s been a great change working at Mantle, reminiscent of the fun I had during my first year working at Shopify, but even more startup-y with the mindset of just go and build or do the thing. No need to double-check with others - it’s all about the speed of shipping. Did I mention that I’m back to writing code every day? Yes, it’s quite fulfilling to be back into building every day given I spent the last 4.5 years at Shopify as a manager of one sort or another.&lt;/p&gt;
&lt;h2 id=&quot;cycling&quot;&gt;Cycling&lt;/h2&gt;
&lt;p&gt;I’ve had to step back my cycling a bunch, at least early this year, given a car accident I was in. I’ve fully healed up though, so was still able to get out when the nice weather arrived to tackle a number of fun rides and the ususal training. Since I travelled to France with my partner in April, the jetlag I had was actually beneficial for waking up early and getting outdoors for some quite early morning rides. Sadly this only lasted for four or so months until I got back into my usual habit of sleeping in.&lt;/p&gt;
&lt;p&gt;Being able to strap my bike on to the back of the car has been a great way to bike anywhere I want. There were several early morning rides I did out in Collingwood, specifically the 250 metre hill climb up Grey Road 19 that doesn’t offer much relief along the way combined with all the scenic backroads. One other day I rode to Creemore and back, combined with going to Thornbury which totalled up to just over 110k in a day. That day was tiring.&lt;/p&gt;
&lt;p&gt;Out in Vancouver, I finally hit up my buddy Vince who I worked with for a few years to go on one of the big rides he frequents, and to show me around the greater Vancouver area. I brought my cycling gear but left my bike at home, instead renting a road bike from a bike shop near English Bay Beach. Vince took me from downtown through Stanley Park, into West Van, where we then continued on the beautiful and punchy hills along Marine Drive. We continued on that road with the dozens of other cyclists to Whytecliff Park and took in the views before turning around to take a break at the nearby Isetta Cafe Bistro, a common stopping point for cyclists and yuppies to grab a bite to eat. My legs were jelly at this point, so it was a well needed break. Arriving back in Vancouver and that’s 70k in the bag.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2024/10/2024-vancouver-ride.jpg&quot; alt=&quot;/static/images/2024/10/2024-vancouver-ride.jpg&quot;&gt;&lt;/p&gt;
&lt;p&gt;I also participated again in the CN Ride for CHEO, the local children hospital for eastern Ontario’s fundraiser ride. My close friend and I decided to sign up for the 70k this year instead of the 35k. We didn’t know this year would be quite rainy and cold. Thank you Canadian spring weather. It felt like there were a thousand or so people doing the 70k with us. Anyways, within the first 5k someone in front of me freaked out at a crack in the road they could have easily rolled over and sent their tire directly into it sideways, causing them to fall on the ground, and causing me who was a foot back from them to topple over them too. Long story short, we both had a few scrapes but were both fine, my bike in perfect condition too, thankfully. I was still able to continue cycling, which was nice to do after all that. Since we were quite behind the pack, the organizers forced any laggards on the 70k to end the ride early and continue down the 35k route to the finish line.&lt;/p&gt;
&lt;h2 id=&quot;skiing&quot;&gt;Skiing&lt;/h2&gt;
&lt;p&gt;This year some friends and I decided to go big and invest in our own ski gear. Previous years we would only rent and try out a few different skis, and try to remember what we rode in. This year we had a good idea of what we preferred, and with the help of Reddit, had some great recommendations to cement our decisions. Boy, was this a game changer: having your own fitted boots is next level. Being able to stay in your boots for hours with no discomfort is something I never knew existed, but glad I have it now. Even a fresh pair of skis that I can repeatedly build up the muscle memory of how it performs in all conditions has greatly helped me become an even better skier this year.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2024/10/2024-ski-sunset.jpg&quot; alt=&quot;/static/images/2024/10/2024-ski-sunset.jpg&quot;&gt;&lt;/p&gt;
&lt;p&gt;I bought a full season’s pass to the nearby ski hill over on the Quebec side: Camp Fortune. They have a wide variety of lifts and challenging runs, even though it is still a hill. Since I was unemployed for part of this season I wanted to make the pass worth it and really increase my skill level. I ended up going out a few mornings each week for several weeks in a row, which absolutely helped with getting better and learning all the runs inside and out. For most of the days it was hard-packed trails, but on the odd day or two we were graced with powder. What a blast. As the season went on, more of the challenging trails opened up until the entire hill was open. I spent most of my time on the Skyline lift as the runs there were in my sweet spot of not too challenging but still fun to ride and those that I can challenge myself on. The Heggtveit run offered me the most growth when dealing with steep slopes, forcing me into learning how to angle myself even more into the slopes. Then it became a race for how fast I could go down it. Strava provided me with the satisfaction that all my “hard” work was paying off. I’m quite exicted for this next season, but will likely get a season’s pass at another hill to switch it up.&lt;/p&gt;
&lt;p&gt;Our now usual trip to Collingwood for a ski weekend at Blue Mountain happened again. I took some time off of work to ski during the “best” of the two days - slightly slushy runs during a weekday. It was a real testament to be able to hit every single run with confidence and speed without falling - even the double black diamond I remember tumbling down the season earlier. That last one felt the best. I was able to take my partner out to help improve her skiing as well. It was a beautiful day with some light powder. Let’s just say that Blue Mountain doesn’t have a great variety of beginner routes. It was a fun day at least!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2024/10/2024-ski-lift.jpg&quot; alt=&quot;/static/images/2024/10/2024-ski-lift.jpg&quot;&gt;&lt;/p&gt;
&lt;h2 id=&quot;travel&quot;&gt;Travel&lt;/h2&gt;
&lt;p&gt;It’s been a big year for travel. My birthday last year was a few days spent in Prince Edward County enjoying the fine beer, wine, and food with my wonderful partner. Definitely a place to keep exploring, especially if the weather is good enough to go around by bike.&lt;/p&gt;
&lt;p&gt;In February, my partner and I travelled to Punta Cana, Dominican Republic to stay at a friend’s place that they were renting for the entire month. It was a great change relaxing at a condo full of expats rather than a resort packed full of people. The days were spent reading on the beach, hitting up the few dozen restaurants in the area, and spending time with friends. One day we had a scenic boat excursion out to do some snorkelling - no scuba diving on this trip or this year though.&lt;/p&gt;
&lt;p&gt;In early spring we spent a week out in France. This was my first time out there, so it was a good feeling uncanny valley everywhere I went. Paris had to be my favourite big city. I would choose it over New York since every place you went there would always be good food, and the transit was nice and convenient. We split this trip into a few parts. The first bit doing some of the scenic bits of France since it was my first time out there: Eiffel Tower stair climb. Next was Reims, which was one of the main cities in the region of Champagne. We tasted a lot of champagne, explored the city, and took our rental car for some drives around. Last stop was Colmar, a city close to the border of Germany which contained quite a few historic towns right out of the 15th century. It was crazy picturesque. We stopped at a random family winery in the area too and discovered a great new style: Viognier. Overall, the food and wine were some of the best parts of this trip.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2024/10/2024-france.jpg&quot; alt=&quot;/static/images/2024/10/2024-france.jpg&quot;&gt;&lt;/p&gt;
&lt;p&gt;Each summer feels like it gets busier. This summer was no exception. Right after France, each week or weekend we had something going on. Whether it was spending a few long weekends or entire week at the cottage, going out to Vancouver for a week to visit friends and do some cycling, a weekend in Montreal, a birthday in Niagara on the Lake, a week in Collingwood, or work events in Toronto, by the time August rolled around, it felt like no time has been spent at home to relax. Even my bags didn’t really get put away, since there was somewhere else to go. I was certainly glad to spend several weekends enjoying the tail end of summer in Ottawa, where the patios and seasonally late warm weather was enjoyed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2024/10/2024-vancouver.jpg&quot; alt=&quot;/static/images/2024/10/2024-vancouver.jpg&quot;&gt;&lt;/p&gt;
&lt;h2 id=&quot;this-next-year&quot;&gt;This next year&lt;/h2&gt;
&lt;p&gt;I’m quite looking forward to this next year. Working at my first startup is exicting and really engaging to help grow Mantle by building awesome product and enabling great customers. My fingers are crossed for the weather working in favour of a killer ski season this year as I plan to sample all of the nearby hills, and maybe a big trip or two. I already have some epic travel plans, some already planned out! Lastly, with the cold weather arriving I’m excited to jump back on my bike trainer and get some good hours in. See you next year!&lt;/p&gt;</content:encoded></item><item><title>Gotchas with the HubSpot API</title><link>https://jonsimpson.ca/gotchas-with-the-hubspot-api/</link><guid isPermaLink="true">https://jonsimpson.ca/gotchas-with-the-hubspot-api/</guid><description>Gotchas with the HubSpot API</description><pubDate>Sat, 22 Jun 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;One of my first projects I worked on at Mantle was building an integration with HubSpot. HubSpot is basically a powerful CRM for sales, marketing, support, and several other aspects of running a business. The main goal of the integration comprised Mantle customers being able to push their data into their HubSpot instance. This was a common request from several customers as Mantle is a rich datasource and HubSpot is where a lot of their email marketing, sales, and reporting was done.&lt;/p&gt;
&lt;p&gt;As I had no past experience integrating with the HubSpot API, this was all new to me and took longer than expected to figure out how to develop on their platform. Beyond the HubSpot API docs being hard to comprehend, there was a few gotchas I experienced while building out the functionality needed to push customer data into HubSpot. Here’s several notable gotchas I ran into.&lt;/p&gt;
&lt;h2 id=&quot;which-hubspot-account-am-i-connected-to&quot;&gt;Which HubSpot account am I connected to?&lt;/h2&gt;
&lt;p&gt;I needed to know which HubSpot account was actually connected to after the user performed the oAuth connection. This was quite useful to know since I had a mapping of Mantle customers to HubSpot contacts, and reconnecting HubSpot with a different account could potentially mess up data in HubSpot.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;/account/info&lt;/code&gt; &lt;a href=&quot;https://developers.hubspot.com/docs/api/settings/account-information-api&quot;&gt;api endpoint&lt;/a&gt; does have the &lt;code&gt;portalId&lt;/code&gt; field available, which is just an integer representing the account, but otherwise no nice name field exists there. Instead it’s possible to get the HubSpot account name from the oauth access or refresh token &lt;a href=&quot;https://developers.hubspot.com/docs/api/oauth/tokens&quot;&gt;inspection api endpoint&lt;/a&gt;. The field is called &lt;code&gt;hub_domain&lt;/code&gt;. &lt;code&gt;hub_id&lt;/code&gt; (the same as &lt;code&gt;portalId&lt;/code&gt;)  is also available on this endpoint, so no need to call the &lt;code&gt;/account/info&lt;/code&gt; endpoint.&lt;/p&gt;
&lt;p&gt;Right after a successful oauth connection, I perform a request to the token inspection API to fetch these values.&lt;/p&gt;
&lt;h2 id=&quot;optional-api-scopes&quot;&gt;Optional api scopes&lt;/h2&gt;
&lt;p&gt;Some api scopes are only available on some plans. There’s no way to check which plan a person is on - probably because the features can be pick-and-choose. If you’re using scopes that aren’t available on all plans then you should put them in &lt;code&gt;optional_scopes&lt;/code&gt; if the app should still work without them. Hubspot has a crappy, unhelpful message whenever an app requires a scope that the org doesn’t have that is displayed during the oauth flow.&lt;/p&gt;
&lt;h2 id=&quot;dealing-with-custom-field-dropdownenum-values&quot;&gt;Dealing with custom field dropdown/enum values&lt;/h2&gt;
&lt;p&gt;Many systems implement some form of a field having a known set of values that can be set. &lt;a href=&quot;https://knowledge.hubspot.com/properties/property-field-types-in-hubspot&quot;&gt;HubSpot has this field-type&lt;/a&gt; available for checkboxes, radio buttons, and dropdowns. They’re all powered by their datatype called enumerable.&lt;/p&gt;
&lt;p&gt;Say you want to provide a user within HubSpot the ability to easily create a report or list of customers based on the data you push into HubSpot. Being able to use a dropdown, checkboxes, or radio buttons provides a much easier experience over manually typing in values into a text box. This better reality is a pain to accomplish given you need to tell HubSpot all of the known possible values up-front when first creating the field, and then later on whenever the values may change. If its a static set of known values that won’t ever change, perfect, you’re good to use this great feature. If the values are user-created or can change over time, you’re in trouble. HubSpot won’t calculate the known values by itself, so you’re forced to keep track of the state of HubSpot’s known values for that field are, or avoid the extra complexity and just stick with a text field.&lt;/p&gt;
&lt;h2 id=&quot;modelling-data-in-custom-fields&quot;&gt;Modelling data in custom fields&lt;/h2&gt;
&lt;p&gt;Its great to be able to create custom fields with different datatypes, but when it comes to storing a set of tuples? ie. many key-values. How can you do it, and what’s most useable in HubSpot? As an example, we want to store data about which apps a user has and their respective plans. The example uses two apps, each with their own plan name.&lt;/p&gt;
&lt;h3 id=&quot;option-1-use-a-single-text-field-and-separate-multiple-values-with-a-newline-or-some-other-character&quot;&gt;Option 1: Use a single text field and separate multiple values with a newline or some other character&lt;/h3&gt;
&lt;p&gt;Eg.
Field name: &lt;code&gt;App plan name&lt;/code&gt;
Value: &lt;code&gt;App 1 - Pro\n App 2 - Custom&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Pros:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Conservative use of fields by only using one&lt;/li&gt;
&lt;li&gt;Very compact display if shown on the contact’s page&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;When using the field in reports, workflows, etc. would have to use a contains operator and then match on the string &lt;code&gt;App 2 - Custom&lt;/code&gt;, or multiple combinations of that for multiple types of values&lt;/li&gt;
&lt;li&gt;Always have to use text type&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;option-2-use-multiple-fields-for-each-app-and-store-the-value-in-each&quot;&gt;Option 2: Use multiple fields for each app, and store the value in each&lt;/h3&gt;
&lt;p&gt;This method gives you the flexibility of giving each key its own field to store whichever values it needs. An enumerable could be used for the values, and this would result in quite easy report building and searching abilities since each field is a distinct app and all of the potential values are present from the HubSpot interface.&lt;/p&gt;
&lt;p&gt;There does come some downsides such as needing to be cognizant of there being an account-wide 1000 custom field limit, and if you decide to use the enumerable type, keeping the potential values in sync as they may change over time. But we know that can be more of a chore than practical.&lt;/p&gt;
&lt;p&gt;Eg.
Field name 1: &lt;code&gt;app 1 plan name&lt;/code&gt;
Value 1: &lt;code&gt;Pro&lt;/code&gt;
Field name 2: &lt;code&gt;app 2 plan name&lt;/code&gt;
Value 2: &lt;code&gt;Custom&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Pros:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One field to easily match on with the equals operator&lt;/li&gt;
&lt;li&gt;Potential for using dropdown/enum field&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Could reach the 1000 custom field limit if there’s many keys&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&quot;option-3-hubspot-custom-objects&quot;&gt;Option 3: HubSpot custom objects&lt;/h3&gt;
&lt;p&gt;If you’re on the HubSpot enterprise plan, you’ve got the ability to use custom objects. They’re supposedly a way to model any sort of data within HubSpot and connect it to HubSpot’s existing objects like Contacts, Companies, etc.&lt;/p&gt;
&lt;p&gt;I haven’t explored this option too deeply as the requirement for the Enterprise plan was a deal breaker for the variety of customers I was building this integration for.&lt;/p&gt;
&lt;h3 id=&quot;picking-one&quot;&gt;Picking one&lt;/h3&gt;
&lt;p&gt;Whichever way you go, pick a key that won’t change. With the app name example, that can probably change over time, so if there’s a slug, or non-modifiable field, go with that.&lt;/p&gt;
&lt;p&gt;I ended up going with option 1, using a single field to hold all keys and values, since this was the lowest common denominator where the custom field use was conservative. It still offered the ability to use these values in reports, but with a higher level of work compared to option 2: having a custom field for each key.&lt;/p&gt;
&lt;h2 id=&quot;querying-contacts-by-email&quot;&gt;Querying contacts by email&lt;/h2&gt;
&lt;p&gt;Most surprising of all, I was querying for a contact record with one specific email and was getting back a record with a different email. This was highly confusing and was only happening with one of our customers, and for one of their contacts.&lt;/p&gt;
&lt;p&gt;For example, search for a contact with an email of &lt;code&gt;me@example.com&lt;/code&gt;, and get back a contact with their email saying its &lt;code&gt;you@example.com&lt;/code&gt;. This ended up being a &lt;a href=&quot;https://knowledge.hubspot.com/records/add-multiple-email-addresses-to-a-contact&quot;&gt;“feature” of HubSpot&lt;/a&gt; where contacts can have many secondary emails attached, and when searching for a contact by email, these secondary emails are also searched. Because I needed to confirm the returned contact’s email, I needed to modify my query to also return all the secondary emails.&lt;/p&gt;
&lt;p&gt;The docs absolutely suck for telling you about it. From my investigation only the forums and old docs site have a reference to it. The field is called &lt;code&gt;secondaryEmails&lt;/code&gt; and can be included in the returned properties when querying contacts.&lt;/p&gt;
&lt;h2 id=&quot;multiple-doc-sites-and-just-docs-in-general&quot;&gt;Multiple doc sites, and just docs in general&lt;/h2&gt;
&lt;p&gt;This last one is just laziness. The old documentation site still lurks around and often shows up high in the search results.  The API docs there are often for a prior version of the API, or just contain even less info than the corresponding page on the current docs site. Why not kill this old version already?&lt;/p&gt;
&lt;p&gt;The new docs site is also pretty crappy. Knowing what the right inputs are and what the output schema could be is a matter of putting together several code samples and using the sad iframed API endpoint explorer. Please bring your documentation platform into this decade by specifying all the potential inputs and outputs to your APIs, and provide some better UX.&lt;/p&gt;
&lt;p&gt;As I’m writing this, HubSpot &lt;a href=&quot;https://developers.hubspot.com/beta-docs&quot;&gt;now has a beta&lt;/a&gt; of their new documentation site. The UX is a bit more modern, viewing API examples are easier, and interacting with the API looks better. Good job, HubSpot.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;That’s all of the troubles I’ve run into so far while developing an integration for HubSpot. I hope these serve as useful pointers for others as they build their own integrations, or at least for myself when I have to go add something new to this integration.&lt;/p&gt;</content:encoded></item><item><title>Hardware Accelerated Plex Transcoding with consumer GPUs in Dell Servers</title><link>https://jonsimpson.ca/hardware-accelerated-plex-transcoding-with-consumer-gpus-in-dell-servers/</link><guid isPermaLink="true">https://jonsimpson.ca/hardware-accelerated-plex-transcoding-with-consumer-gpus-in-dell-servers/</guid><description>Hardware Accelerated Plex Transcoding with consumer GPUs in Dell Servers</description><pubDate>Tue, 05 Dec 2023 20:02:25 GMT</pubDate><content:encoded>&lt;p&gt;One of my recent projects was to improve my Plex media streaming experience by figuring out how to install a consumer graphics card into the &lt;a href=&quot;https://jonsimpson.ca/building-a-homelab-a-walk-through-history-and-investing-in-new-hardware/&quot;&gt;Dell server I built a few years ago&lt;/a&gt;. The main problem I was facing was being able to watch any sort of 4k videos on devices which only viewed content at 1080p, or being away from home with a slow internet connection that couldn’t stream the high bitrates of 4k content. When these situations occurred, the Dell server’s CPU would kick in to transcode the 4k content down into a lower resolution for streaming. Most of the time, the CPU would start struggling to transcode the video at a fast enough rate, causing constant buffering of the video. All 24 of the CPU’s cores would be maxed out, but that’s still not enough for modern video codecs and bitrates.&lt;/p&gt;
&lt;p&gt;The solution was simple enough in theory. Get a graphics card, install it into the server, and configure Plex to use it. Should be a pretty quick process, right? It took me a few weeks to source the right parts, some crafty work, and lots of research to make sure I was doing things right, and not blow the server up. More on that last part later. Make sure you have Plex Pass if you’re following along.&lt;/p&gt;
&lt;h2 id=&quot;initial-graphics-card&quot;&gt;Initial graphics card&lt;/h2&gt;
&lt;p&gt;I had a few graphics cards lying around, so why not try those in the server? Pop the latest one in and discover that it won’t actually speed up video transcoding. It turns out that to be able to support video transcoding, specifically encoding and decoding of video, a pretty recent (in the last several years) graphics card needs to be used. A great reference that everyone uses for graphics card compatibility for Plex transcoding &lt;a href=&quot;https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new&quot;&gt;is this Nvidia page&lt;/a&gt;. For Nvidia graphics cards, Plex needs the card to support the &lt;em&gt;nvenc&lt;/em&gt; (encoding) and &lt;em&gt;nvdec&lt;/em&gt; (decoding) functionality for the specific video codecs of your content. I don’t have firsthand experience with AMD graphics cards, but imagine they follow a similar path as I went through.&lt;/p&gt;
&lt;p&gt;Most of the video content these days is encoded as either h.264 (AVC) or the newer h.265 (HEVC) codecs. Plex can tell you which codec your videos are using. Not common at all yet, but a successor of h.265 is AV1, which is only supported on some of the latest graphics cards. It will be years before content starts showing up as AV1, and even longer for the mass majority of content defaulting to AV1. We’re still split between mostly h.264 with an increasing amount of content showing up as h.265 now.&lt;/p&gt;
&lt;p&gt;Looking up the graphics card that I had in Nvidia’s compatibility matrix, it was clear that it was too old and didn’t support any of the necessary encoding and decoding of h.264 and h.265 videos. Time to find a card that would suffice, and not break the bank.&lt;/p&gt;
&lt;h2 id=&quot;gtx-1660-ti&quot;&gt;GTX 1660 Ti&lt;/h2&gt;
&lt;p&gt;From looking at the compatibility matrix, the Geforce GTX 1660 Ti stood out to me as a good balance of power usage, compatibility for both nvenc and nvdec for h.264 and h.265, and a relatively low price on Facebook Marketplace of $140. This card does require extra power from a 8-pin pci-e power cable, but the Dell server looks to have one of those available. I also looked into an RTX 4060 or newer for future-proofing with AV1 codec support, but they’re still quite pricey even on the aftermarket. GTX 1660 Ti it is. I didn’t even bother looking into buying a Nvidia Quadro graphics card, but some of them do show up for sale on secondhand marketplaces. They should work as well.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/12/PXL_20231124_165748809.jpg&quot; alt=&quot;&quot;&gt;&lt;img src=&quot;/static/images/2023/12/PXL_20231124_165801700.jpg&quot; alt=&quot;&quot;&gt;Immediately after getting the card I had to figure out how to connect the 8-pin female port on the server’s power supply to the 8-pin female port on the graphics card. Usually in consumer desktop hardware, the power supply would already have a free 8-pin cable available, making this process of plugging the graphics card in very easy. The server didn’t have this extra cable available. It was either buy a male to male 8-pin cable off Amazon or Ebay and wait for it to arrive weeks later, or fudge something together by hand. I chose the latter as this was faster, and how hard could it be, right?&lt;/p&gt;
&lt;p&gt;I soon discovered a part of the internet where people were discussing powering GPUs in desktops, powering GPUs in servers, &lt;a href=&quot;https://superuser.com/questions/1000679/i-plugged-an-8-pin-eps-cpu-cable-into-the-gpus-pci-e-port-is-it-damaged&quot;&gt;people frying their hardware&lt;/a&gt;, similar but incompatible 8-pin power standards, &lt;a href=&quot;https://electronics.stackexchange.com/questions/465726/what-are-sense-pins-in-8-pin-pci-express-power-plug&quot;&gt;what are these “sense” pins&lt;/a&gt;, and more. This quickly brought up the seriousness of what I was getting in to. I don’t want to damage any of my hardware if I use the wrong cable or provide too much power to the video card.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/12/gpucpu-pinout.png&quot; alt=&quot;&quot;&gt;After some reading, I discovered that there’s two very similar looking 8-pin power connectors for hardware inside of computers. There’s the pci-e 8-pin standard which is widely used for graphics cards. It has three wires that run 12v, and the rest are all ground. Then there’s the other 8-pin standard called EPS-12v which looks incredibly similar, but has four 12v wires and four ground wires. It would be bad to connect a pci-e 8-pin port to a EPS-12v port – you’ll damage some hardware that way.&lt;/p&gt;
&lt;p&gt;After more reading, I found &lt;a href=&quot;https://electronics.stackexchange.com/questions/590781/confirm-if-dell-poweredge-r720-power-port-mixes-pin-layout-wiring-of-pcie-and-ke&quot;&gt;someone doing something similar&lt;/a&gt; in a newer generation Dell server than mine. Their experience and concern brought up the fact that some servers can provide an 8-pin power port for graphics cards, but use a different layout of which pins provide power and which provide ground. There’s no closure to how it went for them, but this gave me the idea to follow in their steps and use a multimeter to inspect what voltages are actually going through each of those pins. This would influence my next steps of whether building a custom cable would work for my use case.&lt;/p&gt;
&lt;p&gt;The goal was to determine the voltages coming out of the server’s EPS-12v connector’s pins. I familiarized myself with using a multimeter from a few different articles and videos, and trying out what I’ll be doing on the server with a spare PC, as I didn’t have much experience with electronics at this level. I also didn’t want to break anything in the process. After experimenting with checking the voltages on the spare computer’s 8-pin pci-e cable, I felt confident enough to inspect the server. I ended up with some surprising but useful results.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/12/PXL_20231124_1657488092.jpg&quot; alt=&quot;&quot;&gt;It turned out that the Dell server has a EPS-12v connector. Written beside it on the circuit board is GPU POWER, leading me to believe it should work for GPUs. When checking the pins, instead of four 12v pins, there was actually three 12v pins – similar to 8-pin pci-e. This was a big warning that plugging anything into this port should be done with lots of consideration as this isn’t actually a EPS-12v power connector! It’s a pci-e 8-pin in disguise.&lt;/p&gt;
&lt;p&gt;At this point, it was clear that there was the right number of 12v wires and ground wires for a graphics card to theoretically work in this server. I bought a few pci-e 8-pin cable extenders that would fit into the server’s EPS-12v port and graphic card’s pci-e port. What I needed to do was splice the female-to-male cables into a male-to-male cable, with the 12v and ground pins in the right orientation. This required a lot of patience and triple checking that the right wires were in the right orientation. I only had one shot at this, as failure would potentially fry parts of the server.&lt;/p&gt;
&lt;p&gt;Another multimeter trick I picked up was testing the continuity of a wire – basically can electricity flow from one end of the wire to another. This helped with verifying that my splicing of the power cable with butt splice connectors was solid, and that there weren’t any wires somehow crossing each other.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/12/PXL_20231201_011831498-edited-scaled.jpg&quot; alt=&quot;&quot;&gt;Once I was confident enough with my custom cable creation, it was time to proceed with the riskiest part: install the GPU and its custom power cable into the server. I found an afternoon to take the server offline, remove all the hard drives in case of electrical failure, plug the new cable in, test the voltages again, and plug the graphics card in. I monitored the server’s vitals and boot-up via the iDRAC remote management interface from my laptop. It started up and worked like a charm. As the stress and tension of massive hardware failure departed, it was time to move on to putting the server back together and moving over to the OS configuration side of things.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/12/PXL_20231201_211917879.jpg&quot; alt=&quot;&quot;&gt;As we can see in the above image, the graphics card and its power cable are successfully installed. Not pictured is a big plastic piece that covers the RAM and processors to better direct the airflow – it won’t fit anymore with this big of a GPU present. Also note how tall the graphics card is. It’s almost sitting on one CPU’s heatsink, while the top is almost flush with the top of the case. Of course there’s no holes in the case for the GPU’s fans to get air from, but there’s thankfully a small enough gap between the graphics card and the top case to provide enough airflow.&lt;/p&gt;
&lt;h3 id=&quot;aside-dells-quick-release-for-pci-cards-sucks&quot;&gt;Aside: Dell’s quick release for pci cards sucks&lt;/h3&gt;
&lt;p&gt;After the successful power test, I needed to move around some pci cards before closing the case. I quickly found out that the quick release mechanism which slides down over the sides of the pci cards was stuck. There’s some small metal bumps that stick out to provide force on the pci cards to keep them well-seated. Well, this just got stuck on the graphics card, and now won’t loosen. The graphics card was now stuck in the pci slot. After some Googling, it doesn’t seem like anyone else has faced this issue, and brute force wasn’t going to bend this metal bump out of the way. The only thing that breaks is your skin on its sharp metal. I ended up ever so carefully bending the server’s pci riser card out of its slot and the graphic’s cards, cutting part of the case to free one side of the graphics card, then pulling the graphics card free. With this now free, I cut off the quick release mechanism so that it would never happen again.&lt;/p&gt;
&lt;p&gt;What a piece of crap.&lt;/p&gt;
&lt;h2 id=&quot;nvidia-drivers&quot;&gt;Nvidia drivers&lt;/h2&gt;
&lt;p&gt;Now that the graphics card was installed, powered up, and the system would boot normally, it was time to get the Nvidia drivers working. Some context about my Plex setup: it runs in a docker container on Ubuntu server. Part of enabling Plex running in a docker container is to install the Nvidia drivers on the Ubuntu host, as well as the nvidia-container-toolkit package too.&lt;/p&gt;
&lt;p&gt;I had hoped that installing these drivers would be easy, but one can dream. It’s painful, especially if your server is headless, and you’re avoiding installing xserver, the basis for window managers.&lt;/p&gt;
&lt;p&gt;If you’re going through the same process, I recommend giving the &lt;code&gt;ubuntu-drivers&lt;/code&gt; tool a try to install your drivers first. It seems &lt;a href=&quot;https://ubuntu.com/server/docs/nvidia-drivers-installation&quot;&gt;well recommended and documented by Ubuntu&lt;/a&gt;, though didn’t come preinstalled in my version of Ubuntu. I didn’t have any success with this method, and instead manually installed a bunch of packages recommended by a sleuth of places online. The following is what worked for me.&lt;/p&gt;
&lt;p&gt;I followed the instructions on &lt;a href=&quot;https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html&quot;&gt;this Nvidia page&lt;/a&gt; to get the &lt;code&gt;nvidia-container-toolkit&lt;/code&gt; package installed and configured for both docker and containerd support (since I had both installed on my system). Then the following packages installed the drivers, the libs for encoding/decoding, and utils for the &lt;code&gt;nvidia-smi&lt;/code&gt; tool.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;$ sudo apt install nvidia-dkms-535-server libnvidia-decode-535 libnvidia-encode-535 nvidia-utils-535&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;A reboot later and the &lt;code&gt;nvidia-smi&lt;/code&gt; command, a way to see the status of any Nvidia GPUs on the system, showed that the graphics card was working. If this wasn’t showing anything then there’s likely a problem with the hardware or the packages that were installed.&lt;/p&gt;
&lt;h2 id=&quot;configuring-docker&quot;&gt;Configuring Docker&lt;/h2&gt;
&lt;p&gt;Now that the host can see the graphics card, its time to configure docker and the Plex container to use it. &lt;a href=&quot;https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/&quot;&gt;The Plex docs&lt;/a&gt; and &lt;a href=&quot;https://github.com/plexinc/pms-docker#intel-quick-sync-hardware-transcoding-support&quot;&gt;linked docker-specific docs&lt;/a&gt; have a good overview of enabling hardware transcoding.&lt;/p&gt;
&lt;p&gt;For the normal &lt;code&gt;docker&lt;/code&gt; command, the &lt;code&gt;--gpus all&lt;/code&gt; flag is all you need to specify. I use the LinuxServer brand of docker images, and &lt;a href=&quot;https://docs.linuxserver.io/images/docker-plex/#nvidia&quot;&gt;their Plex docs&lt;/a&gt; recommend a few different options all specified. Those are &lt;code&gt;--runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all&lt;/code&gt; which automatically mounts the GPU and drivers into the container. It seems like the &lt;code&gt;--gpus&lt;/code&gt; flag is newer and built in to docker – it might deprecate the &lt;code&gt;--runtime=nvidia&lt;/code&gt; method. I’ll stick with what LinuxServer recommends until they change. I use docker-compose to manage Plex for me, so the command line options to run Plex on docker are slightly different and defined in yaml files.&lt;/p&gt;
&lt;h2 id=&quot;configuring-plex&quot;&gt;Configuring Plex&lt;/h2&gt;
&lt;p&gt;At this point, recreating the Plex container should expose the GPU to Plex. Following the &lt;a href=&quot;https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/&quot;&gt;Plex docs&lt;/a&gt; on enabling hardware transcoding should make it so that any transcoding that Plex needs to do will use the GPU instead of the CPU. In the &lt;em&gt;Transcoder&lt;/em&gt; settings section you should also see your graphics card present in the &lt;em&gt;Hardware Transcoding Device&lt;/em&gt; dropdown.&lt;/p&gt;
&lt;p&gt;Now go and try transcoding a video on your TV or phone. Avoid trying it out with the Plex web UI, there’s a history of transcoding issues that myself and others are facing. You should be able to successfully transcode 4k videos to different resolutions without breaking a sweat. Running the &lt;code&gt;nvidia-smi&lt;/code&gt; command on the host should show that a Plex process is using the GPU.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/12/Monosnap-Plex-2023-12-05-10-25-24.png&quot; alt=&quot;&quot;&gt;Here’s what I have set for my transcoding settings. Plex has these configuration options &lt;a href=&quot;https://support.plex.tv/articles/200250347-transcoder/&quot;&gt;well documented&lt;/a&gt;. I’ve noticed that the &lt;em&gt;Transcoder Quality&lt;/em&gt; option can be set to its highest and still perform completely fine without exhausting my GPU and CPU. I don’t have many concurrent videos being streamed, so I haven’t been able to bottleneck this setup.&lt;/p&gt;
&lt;h3 id=&quot;configure-plex-to-use-a-ramdisk-for-temporary-transcode-files&quot;&gt;Configure Plex to use a ramdisk for temporary transcode files&lt;/h3&gt;
&lt;p&gt;One of the quick and instantaneous speedups to watching a transcoded video was switching over the Plex temporary transcode directory to be &lt;code&gt;/dev/shm&lt;/code&gt; – Linux’s temporary filesystem stored in memory. When set to nothing, it uses the default Plex application support directory. I had to expose &lt;code&gt;/dev/shm&lt;/code&gt; to the docker container for this to work, as the container doesn’t come with it by default, and update the setting in the &lt;em&gt;Transcode&lt;/em&gt; settings page in Plex. After this was enabled, I immediately noticed it was much quicker to seek forward and backwards in a video being transcoded. Almost as if the video wasn’t being transcoded at all!&lt;/p&gt;
&lt;h2 id=&quot;transcoding-for-plex-web-issue&quot;&gt;Transcoding for Plex web issue&lt;/h2&gt;
&lt;p&gt;As mentioned earlier, myself and others have had a lot of difficulty watching transcoded videos when using the Plex web UI. Transcoding works perfectly everywhere else. There’s comedically long &lt;a href=&quot;https://www.reddit.com/r/PleX/comments/hlcuq9/hardware_transcoding_starting_and_then_dying/&quot;&gt;Reddit&lt;/a&gt; and &lt;a href=&quot;https://forums.plex.tv/t/converting-to-a-lower-quality-fails-with-hardware-accelerated-streaming-in-plex-web/610026/105&quot;&gt;Plex forum&lt;/a&gt; posts to help debug this exact issue, and its still going on. I haven’t been able to find a fix, but it’s likely on the Plex web side of things. Hopefully the Plex developers will be able to fix this.&lt;/p&gt;
&lt;p&gt;Instead of starting a video, then switching the quality over to a transcoded version, there is a workaround for starting a video at the necessary transcoded resolution. I’ve had this work, but it’s a pain.&lt;/p&gt;
&lt;h2 id=&quot;gpu-load-testing&quot;&gt;GPU load testing&lt;/h2&gt;
&lt;p&gt;As a fun last thing to do, there’s simple tools out there to stress test your GPU over a period of time. I came across the &lt;a href=&quot;https://github.com/wilicc/gpu-burn&quot;&gt;wonderful &lt;em&gt;gpu-burn&lt;/em&gt; project&lt;/a&gt; which provides a easy way to stress test your GPU. Someone has created a &lt;a href=&quot;https://hub.docker.com/r/chrstnhntschl/gpu_burn&quot;&gt;pretty popular docker image&lt;/a&gt; that can easily be pulled to run this command. It can be run via a&lt;/p&gt;
&lt;p&gt;&lt;code&gt;docker run --gpus all --rm chrstnhntschl/gpu_burn 120&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;where 120 is the number of seconds to run the test for. I found that running the test for 10 seconds or less didn’t actually provide enough time for the test to start up and run. Running it for a few minutes or more is best.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/12/Monosnap-jon@serv01-2023-12-05-12-44-42.png&quot; alt=&quot;&quot;&gt;In my case, Plex would only take up around 600 MB of GPU memory when transcoding a 4k video, and would take up less than 10% utilization, with power staying around 25w and thermals around 30 celsius.&lt;/p&gt;
&lt;p&gt;As I’m running the stress test, I’m keeping an eye on the output of &lt;code&gt;nvidia-smi&lt;/code&gt;. It really does stress your GPU to the max. I was seeing most memory being used, 100% utilization, 100% power usage, and thermals around 60 celsius. Not so bad for a consumer GPU jammed into a server with a handmade power cable. After the test I noticed the temperature gracefully decreasing all the way down to 22 celsius – that’s a testament of decent airflow design in the server. I’ll likely never see the GPU being utilized this much, but it’s good to know that the server can handle it.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;My use case of having an enterprise-grade Dell server run a consumer-grade GPU for Plex transcoding ended up turning into quite a longer journey than initially thought. I would not have expected some things such as the graphics card getting stuck in the case, or building my own power cable, but there’s some constants in technology such as drivers always being troublesome, or software bugs lurking around the corner.&lt;/p&gt;
&lt;p&gt;Time to go enjoy the fruits of my labour.&lt;/p&gt;</content:encoded></item><item><title>Growing Teams with the SBI Model: A Framework for Effective Feedback</title><link>https://jonsimpson.ca/growing-teams-with-the-sbi-model-a-framework-for-effective-feedback/</link><guid isPermaLink="true">https://jonsimpson.ca/growing-teams-with-the-sbi-model-a-framework-for-effective-feedback/</guid><description>Growing Teams with the SBI Model: A Framework for Effective Feedback</description><pubDate>Wed, 16 Aug 2023 20:45:41 GMT</pubDate><content:encoded>&lt;p&gt;As a manager, one of our key responsibilities is guiding the growth and development of your team members. Providing timely and impactful feedback is crucial in helping your reports enhance their skills and continuously improve their work. The SBI model is my go-to framework for giving both positive and critical feedback. I’ll go into the key aspects of this model and share a few personal takeaways to help you effectively implement it in your role as a manager.&lt;/p&gt;
&lt;p&gt;If you’re familiar with the SBI model for feedback, feel free to &lt;a href=&quot;#takeaways&quot;&gt;skip to the takeaways section&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;the-sbi-model-explained&quot;&gt;The SBI Model Explained&lt;/h2&gt;
&lt;p&gt;The SBI model, which stands for Situation, Behaviour, and Impact, is a structured approach to giving feedback. This model provides a clear structure and ensures that your feedback is specific, actionable, and focused on the effect it has on the recipient and the team as a whole.&lt;/p&gt;
&lt;h3 id=&quot;1-situation&quot;&gt;1. Situation&lt;/h3&gt;
&lt;p&gt;Start by describing the specific situation or context in which the behaviour occurred. This helps the recipient understand the context and identify the exact incident or scenario being referred to. For example, &lt;em&gt;“During yesterday’s team meeting when discussing the project timeline…”&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=&quot;2-behavior&quot;&gt;2. Behavior&lt;/h3&gt;
&lt;p&gt;Next, describe the behaviour or action that you observed. Be objective and focus on observable actions rather than subjective interpretations. Phrases like “I noticed that…” or “You did/said…” can be helpful in this step. For example, “I noticed that you interrupted your colleague multiple times while they were presenting their ideas…”&lt;/p&gt;
&lt;h3 id=&quot;3-impact&quot;&gt;3. Impact&lt;/h3&gt;
&lt;p&gt;Finally, explain the impact of the observed behaviour on the individual, the team, or the project. Highlight the pros and cons. This step helps the recipient understand the significance of their actions and the importance of change. For example, “This behaviour may have made your teammate feel disregarded and demotivated, hindering collaboration within the team.”&lt;/p&gt;
&lt;p&gt;This part is key as it explains what the benefit or consequences of the behaviour were. Ideally, the individual will acknowledge what they’ve done and treat this as a valuable takeaway.&lt;/p&gt;
&lt;h2 id=&quot;personal-takeaways-for-effective-feedback&quot;&gt;Personal Takeaways for Effective Feedback&lt;/h2&gt;
&lt;p&gt;After practicing the SBI model for several years to provide both positive and critical feedback, here are a number of takeaways that I’ve learned.&lt;/p&gt;
&lt;h3 id=&quot;lay-the-foundation-for-giving-and-receiving-feedback&quot;&gt;Lay the foundation for giving and receiving feedback&lt;/h3&gt;
&lt;p&gt;When meeting a new team member, or working with someone new, it can be helpful to be explicit about your intention to help them out by sharing your intention of providing them feedback in the future. It’s also helpful to mention that you’re open to receiving feedback. This has a threefold effect: showing that you’re interested in supporting their success, preparing them to be open to receiving feedback, and the potential for yourself to receive feedback.&lt;/p&gt;
&lt;h3 id=&quot;write-the-feedback-out-beforehand&quot;&gt;Write the feedback out beforehand&lt;/h3&gt;
&lt;p&gt;Before giving feedback, take some time to reflect and organize your thoughts. Writing out everything you plan to say helps ensure that your feedback is clear and concise. It also allows you to focus on the delivery of the feedback rather than worrying about the mechanics of the feedback model. This preparation enhances the effectiveness of your delivery. Over time you’ll get better at this, both determining which moments feedback should be provided, and being able to do it off the top of your head. When face to face with the recipient, mentioning &lt;em&gt;that you have collected your thoughts by writing down the feedback and will read it back now&lt;/em&gt; can reduce the awkwardness of sounding like you’re reading something that has been written down.&lt;/p&gt;
&lt;h3 id=&quot;build-up-a-habit-of-providing-positive-feedback&quot;&gt;Build up a habit of providing positive feedback&lt;/h3&gt;
&lt;p&gt;Not everyone will be used to receiving positive or constructive feedback from you. Its impossible to know how they’ll respond, and how open they’ll be. One way to work at building up a repertoire of providing feedback is to start by providing a few genuine positive feedbacks to the recipient over a period of time. An unsolicited message can do it. Over time this builds up the trust and shows that you’re supportive of them. Once this trust and openness has been built up, the recipient will likely be more open to and appreciative of constructive feedback.&lt;/p&gt;
&lt;h3 id=&quot;positive-feedback-can-be-shared-via-a-quick-message-but-constructive-should-often-be-face-to-face&quot;&gt;Positive feedback can be shared via a quick message, but constructive should often be face to face&lt;/h3&gt;
&lt;p&gt;Many types of positive feedback can be shared over a quick message or email as there might not be much back and forth after the fact. It can be quite easy to make a message sound positive to the recipient. When it comes to constructive feedback, its easy for a recipient to read a message and misjudge the tone, making the feedback sound harsh. Having a real time face to face conversation provides much higher quality experience where the recipient’s emotions and reactions can be picked up on in realtime. Having a face to face also shows that you’re invested in their success as this takes more effort than composing a simple message or email. Another benefit is the proceeding conversation the recipient and yourself may have to dig deeper into the feedback, how they see things, etc. is faster than over messaging or email.&lt;/p&gt;
&lt;h3 id=&quot;timing&quot;&gt;Timing&lt;/h3&gt;
&lt;p&gt;Providing feedback promptly is essential. Ideally, give feedback on the same day or as soon as possible after the incident. This ensures that the feedback is fresh in both your mind and the recipient’s, maximizing its effect. Delayed feedback may not have the same effect and can lead to misconceptions or forgotten details.&lt;/p&gt;
&lt;h3 id=&quot;aim-to-chat-now-or-clearly-schedule-a-time-to-share-feedback&quot;&gt;Aim to chat now or clearly schedule a time to share feedback&lt;/h3&gt;
&lt;p&gt;Blindsiding someone with feedback can be a surprise, especially if a meeting invite shows up out of nowhere with no details. Give the recipient a heads up if they would like to receive some feedback. Mentioning that it’s positive or constructive feedback meant to help them can help lower their defences and be open to the feedback.&lt;/p&gt;
&lt;h3 id=&quot;not-everyone-will-be-open-to-feedback&quot;&gt;Not everyone will be open to feedback&lt;/h3&gt;
&lt;p&gt;It can happen that someone just isn’t comfortable with receiving feedback, or that right now isn’t a great moment. Who knows what mindset the recipient is currently in. This can happen, and it’s of no use to push feedback on to someone who doesn’t want to receive it. This would lead to a loss of trust, amongst other negative outcomes. Instead, move on without providing the feedback. There might be another occasion in the future. Consider the previous point on providing unsolicited positive feedback. It may help open up the recipient to hearing constructive feedback.&lt;/p&gt;
&lt;h3 id=&quot;exclusively-talk-about-the-feedback&quot;&gt;Exclusively talk about the feedback&lt;/h3&gt;
&lt;p&gt;Prefer to grab some time to talk with the recipient outside of their ordinary schedule with you, and have the entire conversation only focus on the feedback and any potential questions or organic conversations that focus on the feedback. This makes it very intentional that the time should focus on the feedback and personal growth. It defeats the point if the feedback is promptly given, then the conversation pivots away to a different subject like it never happened. Time should be given for the recipient to reflect on it, ask any questions, and for yourself to make any suggestions, if necessary.&lt;/p&gt;
&lt;h3 id=&quot;dont-assume-other-peoples-observations&quot;&gt;Don’t assume other people’s observations&lt;/h3&gt;
&lt;p&gt;Aim to provide feedback only on your own observations. Relying on another’s observations, especially if not present at the time can make it harder to form strong points for the SBI model. Sometimes this best practice should be broken if other’s have shared positive or critical feedback with you that is beneficial enough to share. As a manager, this is one of the superpowers: hearing how one team member is doing from other team members, and sharing that feedback to help their growth.&lt;/p&gt;
&lt;h3 id=&quot;providing-feedback-wont-always-have-the-desired-effect&quot;&gt;Providing feedback won’t always have the desired effect&lt;/h3&gt;
&lt;p&gt;Just sharing feedback with an individual won’t always change their behaviours going forward. Don’t let it weigh heavy on you or think badly of the recipient if you feel that it looks like previous feedback didn’t have any effect at all. It takes serious effort and willpower for someone to change their own behaviour. If witnessing someone completely disregard previous feedback, use this as another opportunity to provide feedback, while emphasizing that this was previously brought up. Provide benefit of the doubt as people can make mistakes, and it can take multiple times to get into the habit. As a manager, this is a powerful way to grow members of your team, but use it wisely.&lt;/p&gt;
&lt;h3 id=&quot;if-their-attitude-changes&quot;&gt;If their attitude changes&lt;/h3&gt;
&lt;p&gt;People new to receiving critical feedback may have the feedback weigh heavily on themselves immediately or some time after. Criticism can be taken to heart. This can look like a stark negative change in their attitude. If this happens, help by empathizing with them that this is just a little bump on the road, and that the feedback is focused on the behaviour (which can be improved upon) and not a criticism of who the person is. More plainly: the feedback is aimed to improve their behaviours, not change who they are. Over time, receiving critical feedback can become easier and easier for the recipient.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Well done on making your way through the 69 mentions of the word &lt;em&gt;feedback&lt;/em&gt; in this post! In summary, the SBI model provides a valuable framework for giving feedback that promotes professional growth and continuous improvement. By following the structure of Situation, Behaviour, and Impact, you can deliver specific and actionable feedback that helps your reports understand the context, the observed behaviour, and its impact. Remember to always provide feedback as soon as possible and take the time to prepare yourself before engaging in feedback discussions. By using the SBI model and incorporating these personal takeaways, you can foster a culture of growth and development within your team.&lt;/p&gt;</content:encoded></item><item><title>Twenty Eight</title><link>https://jonsimpson.ca/twenty-eight/</link><guid isPermaLink="true">https://jonsimpson.ca/twenty-eight/</guid><description>Twenty Eight</description><pubDate>Sat, 01 Oct 2022 23:47:00 GMT</pubDate><content:encoded>&lt;p&gt;My 28th year has been a great one. This year specifically, travel has picked up a lot, house renovations have greatly progressed, and lastly a few achievements I’m quite proud of.&lt;/p&gt;
&lt;h2 id=&quot;travel&quot;&gt;Travel&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/04/IMG_20220123_181834_601.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Scuba diving the west coast of Costa Rica&lt;/figcaption&gt;Travel-wise there’s been several trips for both work and pleasure. In January, I got to visit Costa Rica – a perfect time to visit. I spent most of my time in Tamarindo, one of the more touristy towns on the west side of the country – the locals have nicknamed it “Tamagringo”. Some of the highlights were spending time on the beach, going scuba diving off the Catalina islands, and enjoying such a wide variety of food with great company.&lt;p&gt;&lt;/p&gt;
&lt;p&gt;In June, I had the opportunity for one of my teams at work to travel to Berlin! Given half the team was located in Ireland, it was time to pay them back by having us Canadians and Americans cross the Atlantic. Plus, this was my first time over in Europe! Some notable highlights of this trip were bonding with my team over German and Turkish food, a beautiful bike tour of the city, tasting a variety of German beers, and ripping around the city on scooters.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/04/PXL_20220820_221408810.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Lake Ontario&lt;/figcaption&gt;This summer was a fun kind of busy. The majority of the month of August was spent visiting my folks cottage, girlfriend’s folks’ places, and a few weddings, all the while mostly working at the same time – this flexibility is pretty freeing. We rented a car and sped off to spend time in places like Gravenhurst, Collingwood, Toronto, and Prince Edward County to enjoy nature, friends, family, wine, beer, and cycling. I’m sure we’ll do something similar again next summer, with the added benefit of using my own car.&lt;p&gt;&lt;/p&gt;
&lt;p&gt;In October, I combined a work trip to New York with a few extra days of pleasure. This trip was spent hitting up more of the niche places across the city, such as doing a tour around Brooklyn breweries, experiencing more of the restaurants and cocktail bars, and seeing one of my all-time favourite electronic artists: Flume.&lt;/p&gt;
&lt;h2 id=&quot;career&quot;&gt;Career&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/04/PXL_20220505_210001164.PORTRAIT.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Cocktail making class at an offsite&lt;/figcaption&gt;As with every year, work has significantly changed in great ways. I received a promotion to Senior Development Manager, which cements my path on being a lead of leads. This year also saw a lot of great accomplishments and learnings such as growing my people leads, preparing one of my teams for reaching “feature complete” on a product, and getting even better at growing development teams into being self sufficient. One big growth opportunity I’m targeting now is to focus on strategic thinking to lead my engineering and product area for tackling our next big goals.&lt;p&gt;&lt;/p&gt;
&lt;h2 id=&quot;house&quot;&gt;House&lt;/h2&gt;
&lt;p&gt;At home, I’ve been spending much of my time continuing on with home improvement, specifically a bathroom, bedroom, and kitchen reno – most of which I’ve been doing myself and reaching out to the pros when needed! It’s been fulfilling to go from knowing zero about tiling floors and walls to being pretty competent. Nothing like picking up another skill that pays off. Other proud moments were painting my girlfriend’s new place with the wealth of painting techniques and knowledge – it’s a great feeling to cut edges with ease and roll walls to get perfect, smooth coverage. My bias towards taking the time to do things right definitely showed on this and many other projects.&lt;/p&gt;
&lt;h2 id=&quot;cycling&quot;&gt;Cycling&lt;/h2&gt;
&lt;p&gt;Some of the best cycling adventures this year have been back up at the cottage in Gravenhurst, a botched go up the escarpment in Collingwood, through the wine region of Prince Edward County, and a loop around Amherst Island. Don’t tell the rental car company, but I strapped a $20 secondhand bike rack to the back of this beauty of a rental car to aid with a bunch of these adventures. A very entertaining purchase this year was a bike computer and power meter pedals. This exposed a ton of data I’ve dreamed about having for my outdoor cycling. Now I’m able to more consistently measure my actual wattage and fitness. This purchase made early on in the summer brought a lot of great and nerdy data for the rest of the year.&lt;/p&gt;
&lt;p&gt;As always, Gravenhurst was classic – smooth, scenic routes through cottage county. On one occasion I had one of my best buddies up to the cottage with his bike as well. We rode into Gravenhurst and hit up the excellent Clear Lake Brewing Co. We also both got over our fear of riding without holding onto the handlebars 😂&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/04/PXL_20220521_213309411.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Prince Edward County’s wine country roads make for picturesque cycling&lt;/figcaption&gt;When in Prince Edward County for a May 24 long weekend wine tasting trip with friends, we rented bikes from a local place in Hillier. What ensued for the rest of that day was beautiful bike rides from one winery to another, passing by vineyards and farmland. Over 15 km (if that) we hit up several wineries and one brewery for tastings. I’m not the biggest fan of the reds and whites that this region has to offer, but the last place on our ride, Traynor Family Vineyard, had the best tasting wines. That could have been the tasty pet-nat style, or just us all being blasted – probably both. To finish off that great last tasting, we had the best sunset ride back to our Airbnb. Shoutout to this bike shop for their great service!&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Leading up to visiting Collingwood for a week, I knew there was a sizeable community of cyclists of all capabilities and had to scope out what great routes there were. On my first day grabbing coffee in town I started a conversation with a local cyclist and they very helpfully recommended some great routes and suggested avoiding some others. Collingwood has the Georgian Trail, an old railway bed turned gravel path running from its west side to Meaford – 34 kms in total. This was a great way to get some great views of Georgian Bay and stop by Blue Mountain for a coffee. One of the big trips I planned was taking many of the scenic backroads up the Niagara escarpment. This meant epic hills, great views, etc. Probably an hour in and most of the way up the 200m climb, I got a flat. A couple inner tubes later I figured out my tire had been the problem, and I called in for a rescue. I still need to go back and conquer the 70km loop I mapped out.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2023/04/PXL_20220813_141732366-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Amherst Island’s many fields and wind turbines&lt;/figcaption&gt;Amherst Island is a small island east of Prince Edward County. I was lucky enough to be invited out for a weekend by my girlfriend’s mom and her boyfriend who has a place on the island. One of the mornings was spent doing a 2 hour loop of the roads that mostly run the perimeter of the island. Rough gravel was a bit of a pain for my slick road bike tires, but pumping them up even more handled it without a problem. This ride brought beautiful scenery of Lake Ontario and sprawling farmland vistas.&lt;p&gt;&lt;/p&gt;
&lt;h2 id=&quot;whats-next&quot;&gt;What’s next?&lt;/h2&gt;
&lt;p&gt;Getting back into reading, and writing for that matter, would be a great throwback! My blog is definitely lacking posts, though I’m still amassing ideas for content, specifically around management.&lt;/p&gt;
&lt;p&gt;I’m also looking forward to concluding most of the renovations that have been taking up most of my time to put towards more consistent cycling. I’d love to get into longer distances and more elevation during rides. Maybe that means doing a few loops in Gatineau Park now that it’s becoming quite a routine ride, or combining it with a couple dozen kms along some backroads.&lt;/p&gt;
&lt;p&gt;As always, I’m looking forward to more trips. As I write this, I’m on my way to spend 10 days on the beautiful island of Kauai in Hawaii.&lt;/p&gt;</content:encoded></item><item><title>Twenty seven</title><link>https://jonsimpson.ca/twenty-seven/</link><guid isPermaLink="true">https://jonsimpson.ca/twenty-seven/</guid><description>Twenty seven</description><pubDate>Fri, 01 Oct 2021 19:30:00 GMT</pubDate><content:encoded>&lt;p&gt;October 1st just passed yesterday. Another year in the pandemic, though I tried to make the best of it! Here’s the 27th edition of of my yearly reflections on what I’ve been up to, what I’ve achieved, and where I’ve grown. You can find previous years reflections since I’ve been &lt;a href=&quot;https://jonsimpson.ca/twenty/&quot;&gt;Twenty&lt;/a&gt;, &lt;a href=&quot;https://jonsimpson.ca/twenty-one/&quot;&gt;Twenty-one!&lt;/a&gt;, &lt;a href=&quot;https://jonsimpson.ca/twenty-two/&quot;&gt;Twenty-Two&lt;/a&gt;, &lt;a href=&quot;https://jonsimpson.ca/twenty-three/&quot;&gt;Twenty-Three!&lt;/a&gt;, &lt;a href=&quot;https://jonsimpson.ca/twenty-four/&quot;&gt;Twenty-four!&lt;/a&gt;, &lt;a href=&quot;https://jonsimpson.ca/twenty-five/&quot;&gt;Twenty Five&lt;/a&gt;, and &lt;a href=&quot;https://jonsimpson.ca/twenty-six/&quot;&gt;Twenty Six&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I made the biggest purchase in my life to buy a duplex here in Ottawa’s Centretown neighbourhood. I’ve also become a lead of leads in my engineering organization. I’ve even had a few big accomplishments with my road cycling hobby thanks to a few friends.&lt;/p&gt;
&lt;p&gt;One of my friends asked me about the high and lows of the year. After a bit of thinking, buying the house was definitely the height of the year. The low likely was not spending as much time as I usually would with family and friends either up at the cottage or travelling around. Let’s get in to a few of the highlights!&lt;/p&gt;
&lt;h2 id=&quot;house&quot;&gt;House&lt;/h2&gt;
&lt;p&gt;I bought a house, and moved in on the 5th of July! The landlord of the place I was renting previously was a seasoned real-estate agent. He shared some of his marketing materials with me sometime last year. With lots of people buying places in the country instead of the city to gain more space, the tradeoff wasn’t particularly worth it to me. Over a year into the pandemic and I approached my landlord to ask him if he’ll be my agent. He agreed and a month or two later of not many listings going up due to the lockdown, one place in Centretown meeting my criteria eventually showed up. I viewed it, ended up falling in love with it, and put in an offer for the place. As luck had it, my offer was accepted!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/10/PXL_20211004_023956019.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;The dining room after some fresh paint and furniture&lt;/figcaption&gt;Over the past few months, most of my free time has been going towards making cosmetic improvements to the place, running ethernet cables through the walls, deep cleaning everything, buying new furniture, and starting the never-ending decorating journey. I’m quite glad that all of the DIY skills and confidence I’ve gained helping my family with their projects when growing up is helping me greatly now. Having extra time on my hands certainly helps as well!&lt;p&gt;&lt;/p&gt;
&lt;p&gt;I’m quite thankful for having an interior designer friend who’s been very handy when suggesting furniture pieces and paint colours. I definitely wouldn’t have as cool looking of a place without it! Another great friend has also lent me a number of tools to help with the handful of jobs I’ve been doing around the place.&lt;/p&gt;
&lt;p&gt;I’m most excited about entertaining and enjoying the house when I’m not in DIY mode all the time. This Christmas should be a blast hosting my family, and there’s likely a few house parties I want to throw as things open up more. As long as I’m enjoying the space with others, I’ll be fulfilled.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/10/PXL_20210814_191049017.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;The top floor deck with obligatory string lights&lt;/figcaption&gt;## Career&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Earlier this year I received a promotion to become a manager of managers! Yes, I’m in full Office Space-esque “What do you do here?” territory. Jokes aside, this has been incredibly exciting as I am now accountable for the people and product across a handful of teams. Very recently many of my previous responsibilities have been handed over to two fantastic people leads on my team. The conversations I’m having with them, some senior developers, and high growth individuals are very focused on helping them grow their careers and impact that they bring to the team, which has been quite fulfilling.&lt;/p&gt;
&lt;p&gt;On the product side of things, a recent trend has evolved from “build extendable features for the long term” to “build self-service, or low-code features for the long term”. This is a neat observation and paradigm shift, which reflects on the development teams being more mature, and the need to build the right knobs and switches into the system to more easily enable the business to change how they work.&lt;/p&gt;
&lt;p&gt;For a number of months, I was short a people lead for one of the teams and took on the extra load of performing all of the people management work until we hired on a permanent replacement. I had something like 17 half-hour weekly one on ones with everyone who normally reports to me, and every developer from the team that didn’t have a people lead. This took a crazy amount of time out of my schedule, but I loved the chaos and leaned into it. This was a great test of my time management, prioritization, and delegation skills. Since I wasn’t able to be involved in each team’s day to day, I heavily leaned on the seniors of each team to take ownership over the technical and product decisions. This worked out miraculously well, and was an amazing growth opportunity for these individuals to take on more ownership and make more decisions. Each team being in a mature enough state to not require my day to day involvement was key for me to focus on the more important people management side of things.&lt;/p&gt;
&lt;p&gt;Growing of these teams also took precedence, as it periodically does every year. We grew the teams by several developers, hiring folks from Ireland, around Canada, and even the US. I still have to remind myself that we truly hire great people to work with, both professionally and socially.&lt;/p&gt;
&lt;h2 id=&quot;cycling&quot;&gt;Cycling&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/10/PXL_20210820_173317838_2.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Happily cycling from the cottage to Bala in my Shopify-branded kit&lt;/figcaption&gt;Where to begin. One of the biggest forces that has helped push myself out of my comfort zone and see just how much cycling I can endure was thanks to a great amount of healthy competition with some friends. When the weather got cold, indoor cycling started, and a number of cyclists from work came together to do some virtual group rides. Three of us wanted to go further and ended up cycling multiple times a week. Over a number of months of seeing our cycling strength and endurance increase, we signed up for some very tough challenges in our virtual cycling app of choice, Zwift. Those challenges were:&lt;p&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;a href=&quot;https://www.strava.com/activities/4941895061/overview&quot;&gt;PRL Full route&lt;/a&gt;, consisting of 175 km, 2281 m of climbing. &lt;em&gt;&lt;strong&gt;It took 7 hours!!!&lt;/strong&gt;&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I have to pause here, since going into this, we knew that this would be pushing our limits, and then some. Our times continuously riding were about 4 hours max. My cycling buddies and I were expecting this to be a 6-7 hour ride for us. Cycling for this long becomes quite the mind game along with the expected fatigue. As I shared above, my cycling buddies and I were able to finish it! I was seriously questioning why I enjoyed this whole cycling thing for a few days after that. To get over the pain and suffering of riding the same hilly route 11 times during the challenge, I forced myself a few days later to go ride it once more to get over my newfound loathing of it. It worked. I got over it. The best part about this challenge was that every other challenge paled in comparison since none were as challenging as this!&lt;/p&gt;
&lt;p&gt;Otherwise here’s a few other notable achievements from doing all of the indoor cycling on Zwift:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;1000 m climb of &lt;a href=&quot;https://www.strava.com/activities/4999631721/segments/2809605205714398712&quot;&gt;Alpe du Zwift in less than an hour&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.strava.com/activities/5144017689/&quot;&gt;Four Horsemen route&lt;/a&gt; – 100 km, 2200 m in 4:15&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.strava.com/activities/5302450935&quot;&gt;Mega Pretzel route&lt;/a&gt; – 112 km, 1600 m, in 4:13&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.strava.com/activities/5027065021&quot;&gt;The PRL Half route&lt;/a&gt; – a cakewalk after doing the PRL Full&lt;/li&gt;
&lt;li&gt;And that time my &lt;a href=&quot;https://www.strava.com/activities/4845227395&quot;&gt;FTP jumped from 180 to 221&lt;/a&gt; as I chased a friend up Alpe du Zwift&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;From all of the cycling, the amount of power I could exert increased from 2020 to 2021 significantly! Some quick number crunching shows a 40-60% improvement, which is mind blowing!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/10/2020-to-2021-power-curve.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;2020 (lighter line) to 2021 (darker line) watts/kg power curve. &lt;/figcaption&gt;When the weather warmed up, there was a number of great adventures and achievements that were had:&lt;p&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Going to Gatineau Park a few dozen times and feeling like I’m among the fast cyclists&lt;/li&gt;
&lt;li&gt;Cycling with &lt;a href=&quot;https://www.strava.com/activities/5674778824&quot;&gt;my Zwift buddies in Gatineau Park&lt;/a&gt;!&lt;/li&gt;
&lt;li&gt;Taking a trip to the &lt;a href=&quot;https://www.strava.com/activities/5390378980&quot;&gt;Forks of the Credit&lt;/a&gt; area near my hometown to crush those hills&lt;/li&gt;
&lt;li&gt;Making my way &lt;a href=&quot;https://www.strava.com/activities/5638699307&quot;&gt;from Ottawa to Stittsville&lt;/a&gt;, and then turning it into a 100k ride&lt;/li&gt;
&lt;li&gt;Biking from the &lt;a href=&quot;https://www.strava.com/activities/5824824965&quot;&gt;cottage to the nearby town of Bala&lt;/a&gt; on beautiful cottage country backroads&lt;/li&gt;
&lt;li&gt;Lastly, a fun ride and talk on the Caledon Trailway with my aunts&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/10/PXL_20210531_160800296.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;One of the funnest hills to climb on the Forks of the Credit ride, this switchback was beautiful to take in&lt;/figcaption&gt;I can’t wait to see what I’ll get up to next year cycling-wise!&lt;p&gt;&lt;/p&gt;
&lt;h2 id=&quot;whats-next&quot;&gt;What’s next?&lt;/h2&gt;
&lt;p&gt;Well, there’s probably a decent amount of travel I’m looking forward to over the next year. Some already figured out such as a handful of business trips to hang with the teams, and an unknown amount of personal ones with friends and family that I’m most excited about.&lt;/p&gt;
&lt;p&gt;Once the renovations and DIY around the house have settled down, figuring out what I’ll do with the other unit is on the list. Having a second source of income can only help set myself up more for the future.&lt;/p&gt;
&lt;p&gt;Hopefully I’ll do some even bigger cycling trips, and get around to that bike camping I wanted to get around to this past summer. Buying a bike computer and power meter would help on these adventures and regular training too.&lt;/p&gt;
&lt;p&gt;2022 is looking bright!&lt;/p&gt;</content:encoded></item><item><title>Binge-worthy podcasts discovered during the pandemic</title><link>https://jonsimpson.ca/binge-worthy-podcasts-discovered-during-the-pandemic/</link><guid isPermaLink="true">https://jonsimpson.ca/binge-worthy-podcasts-discovered-during-the-pandemic/</guid><description>Binge-worthy podcasts discovered during the pandemic</description><pubDate>Mon, 15 Mar 2021 01:58:59 GMT</pubDate><content:encoded>&lt;p&gt;Throughout 2020 I have listened to hundreds if not now thousands of hours worth of podcasts. Have I learned anything useful? Not really. Did it help keep me entertained through the pandemic? Hell yes.&lt;/p&gt;
&lt;p&gt;This blog post complements another post I wrote a number of years ago which collected &lt;a href=&quot;https://jonsimpson.ca/my-top-tech-software-and-comedy-podcast-list/&quot;&gt;my favourite podcasts over technology, entertainment, and software development&lt;/a&gt;. This blog post focuses more on the different types of entertainment that are great for binging through while on a long road trip, while doing some chores, or desiring an escape from the day to day. Here are my reviews for a number of the most noteworthy podcasts that have kept me busy over the last year.&lt;/p&gt;
&lt;h2 id=&quot;not-another-dd-podcast&quot;&gt;&lt;a href=&quot;https://www.naddpod.com/&quot;&gt;Not Another D&amp;#x26;D Podcast&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/01/naddpod.jpeg&quot; alt=&quot;&quot;&gt;Not Another D&amp;#x26;D Podcast paints an incredible adventure through its hundreds of hours of episodes. Dungeon Master Brian Murphy is an expert at storytelling while balancing the randomness of the game of Dungeons and Dragons. His ability to have such a wide array of voices for characters in the story complements the improv of players Jake Hurwitz, Emily Murphy, and Caldwell Tanner. The allure of the podcast is building up an adventure that the listener is highly invested in while joking around enough to not alienate the story.&lt;/p&gt;
&lt;p&gt;I haven’t ever played a game of Dungeons and Dragons before, but got introduced to this podcast and the idea of D&amp;#x26;D through listening to a &lt;a href=&quot;https://art19.com/shows/not-another-d-and-d-podcast/episodes/49263320-401b-41c9-bc09-3ba751b6499f&quot;&gt;hilarious holiday special episode&lt;/a&gt; that featured Amir Blumenfeld. I was hooked on this podcast ever since, and have gone so far to subscribe to this podcast’s Patreon primarily for the again-hilarious post-episode commentary.&lt;/p&gt;
&lt;h2 id=&quot;the-adventure-zone&quot;&gt;&lt;a href=&quot;https://maximumfun.org/podcasts/adventure-zone/&quot;&gt;The Adventure Zone&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/01/the-adventure-zone.jpg&quot; alt=&quot;&quot;&gt;The McElroy family puts this excellent role-playing games show together. Brothers Justin, Travis, Griffin, and father Clint partake in a handful of sagas across Dungeon and Dragons, and other role-playing games. I particularly find their D&amp;#x26;D seasons more entertaining than the rest. Their gameplay takes a bit more of an absurdist comedy approach compared to Not Another D&amp;#x26;D Podcast, but the storytelling and character building is still maintained.&lt;/p&gt;
&lt;p&gt;Some of the other off-season games they have played haven’t been as interesting to me. The game of D&amp;#x26;D brings out more excitement and variety to the storytelling, keeping me on the edge of my seat, compared to the other games which involved a lot less game mechanics and leant more on the story being told.&lt;/p&gt;
&lt;h2 id=&quot;black-box-down&quot;&gt;&lt;a href=&quot;https://roosterteeth.com/series/black-box-down&quot;&gt;Black Box Down&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/01/black-box-down.jpg&quot; alt=&quot;&quot;&gt;Gus and Chris walking through the unbelievable chain of events that go into many air flight incidents leaves you with a new appreciation for the safety of the flying industry. Each episode follows the timeline of events until disaster or rescue and then dives deep into the results of multi-year investigations that most of these flight disasters go through.&lt;/p&gt;
&lt;p&gt;Gus is the primarily the one driving the show with storytelling and introducing new information, while Chris adds questions and commentary that many of us layman listeners would wonder about. The show keeps the listener tuned in solely based on what surprising or interesting new information will unfold for the current real-life story.&lt;/p&gt;
&lt;p&gt;Some of my favourite episodes are the unbelievable fight between &lt;a href=&quot;https://www.stitcher.com/show/black-box-down/episode/a-fight-in-flight-69632729&quot;&gt;a hijacker and the flight crew aboard a FedEx flight&lt;/a&gt;, an &lt;a href=&quot;https://www.stitcher.com/show/black-box-down/episode/interview-with-crash-survivor-74201189&quot;&gt;interview with a plane crash survivor&lt;/a&gt; who believes they benefitted from the incident, and &lt;a href=&quot;https://www.stitcher.com/show/black-box-down/episode/missing-malaysian-flight-71362622&quot;&gt;the recent Malaysian Airlines flight 370&lt;/a&gt; which disappeared over the Indian Ocean.&lt;/p&gt;
&lt;h2 id=&quot;triforce&quot;&gt;&lt;a href=&quot;https://www.youtube.com/channel/UCgXiTWrFg05fTPfw1YLb5Ug&quot;&gt;Triforce!&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/01/triforce.jpg&quot; alt=&quot;&quot;&gt;This isn’t for everyone. Some poop jokes, a hint of political incorrectness, and banter about normal life from these three guys is surprisingly entertaining and definitely NSFW. Their day jobs are of streamers – those who play videogames for others to watch online. They convene weekly to catch up and make each other laugh over the mundane experiences they have, whats latest in the news, or the games they play.&lt;/p&gt;
&lt;p&gt;To add to the uniqueness, many of the early episodes feature Pyrion’s original short stories of Bodega, a gunslinger in a futuristic galaxy. &lt;em&gt;Scoffee&lt;/em&gt;, short for space coffee, is this universe’s version of our own coffee. This, and a handful of other original words add to the fictional world. After enough interest, &lt;a href=&quot;https://www.goodreads.com/book/show/51338440-bodega&quot;&gt;Pyrion wrote a Bodega novel&lt;/a&gt; to connect together many of the storylines originally read during the podcast. I haven’t read it yet, but should eventually pick up a copy for myself.&lt;/p&gt;
&lt;p&gt;Whenever I’m in the mood for a good laugh I know I can revisit a couple favourited episodes, or choose a random one if I’m feeling lucky. Some of those favourites are &lt;a href=&quot;https://podcasts.apple.com/podcast/triforce-25-good-days-gone-by/id304557271?i=1000377169861&quot;&gt;the absurdity of being at kids parties&lt;/a&gt; (#25), &lt;a href=&quot;https://podcasts.apple.com/podcast/triforce-40-backdoor-neighbour/id304557271?i=1000384540765&quot;&gt;Pyrion and his sketchy neighbour&lt;/a&gt; (#40), and &lt;a href=&quot;https://podcasts.apple.com/podcast/triforce-89-thats-bonerino/id304557271?i=1000429323233&quot;&gt;imagining a new and very NSFW gameshow&lt;/a&gt; (#89).&lt;/p&gt;
&lt;h2 id=&quot;notable-mentions-from-this-year&quot;&gt;Notable mentions from this year&lt;/h2&gt;
&lt;h3 id=&quot;deep-cover&quot;&gt;&lt;a href=&quot;https://www.pushkin.fm/show/deep-cover-the-drug-wars/&quot;&gt;Deep Cover&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/01/deep_cover.jpg&quot; alt=&quot;&quot;&gt;An FBI agent retelling their experiences of going undercover and taking down drug lords? ’nuff said.&lt;/p&gt;
&lt;h3 id=&quot;the-orange-tree&quot;&gt;&lt;a href=&quot;https://thedragaudio.com/show/the-orange-tree/&quot;&gt;The Orange Tree&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/01/orange-tree.jpg&quot; alt=&quot;&quot;&gt;Not your typical murder mystery, &lt;em&gt;The Orange Tree&lt;/em&gt; chronicles the brutal murder of Jennifer Cave, a student at Univeristy of Texas at Austin. The series is hosted by Haley Butler and Tinu Thomas who both attended the same university where the murder happened a decade earlier. They kept hearing about the infamous Orange Tree apartment complex from friends and the murder being in the news, which ultimately led the two to produce this show.&lt;/p&gt;
&lt;p&gt;The format of the series consist of multiple interviews, retellings of news clips, court transcripts, and questioning to tell the story. Each episode does a great job at keeping you wanting to listen to the next episode based on a big reveal in the last few minutes of each episode.&lt;/p&gt;
&lt;h3 id=&quot;brainwashed&quot;&gt;&lt;a href=&quot;https://www.cbc.ca/listen/cbc-podcasts/440-brainwashed&quot;&gt;Brainwashed&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/01/brainwashed.jpg&quot; alt=&quot;&quot;&gt;### &lt;a href=&quot;http://revisionisthistory.com/&quot;&gt;Revisionist History&lt;/a&gt; and &lt;a href=&quot;https://www.dancarlin.com/hardcore-history-series/&quot;&gt;Hardcore History&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2021/01/revisionist-history.jpg&quot; alt=&quot;&quot;&gt;&lt;img src=&quot;/static/images/2021/01/hardcore-history.jpg&quot; alt=&quot;&quot;&gt;I’m no history enthusiast, but from listening to the &lt;a href=&quot;https://www.dancarlin.com/product/hardcore-history-wrath-of-the-khans-series/&quot;&gt;Genghis Kahn series&lt;/a&gt; from the Hardcore History podcast helped change my view that history can be intriguing if told the right way. The same goes for a few episodes from Revisionist History’s telling of &lt;a href=&quot;http://revisionisthistory.com/seasons?selected=season-5&quot;&gt;Curtis LeMay&lt;/a&gt; who was a prominent American Air Force General during World War II. I still have a vast amount of episodes to listen to from these two podcasts, but they will likely keep my interest for many hours.&lt;/p&gt;
&lt;h2 id=&quot;thats-all-for-now&quot;&gt;That’s all for now&lt;/h2&gt;
&lt;p&gt;In the end, I wish there were more hours in the day to listen to more podcasts. Thankfully when I need a break from one, there’s another great podcast to start or pick back up.&lt;/p&gt;</content:encoded></item><item><title>ZFS snapshots in action</title><link>https://jonsimpson.ca/zfs-snapshots-in-action/</link><guid isPermaLink="true">https://jonsimpson.ca/zfs-snapshots-in-action/</guid><description>ZFS snapshots in action</description><pubDate>Sun, 10 Jan 2021 17:52:11 GMT</pubDate><content:encoded>&lt;p&gt;I recently had my laptop running Xubuntu 19.10 reach its end of life for security updates. I needed to upgrade to a newer version of Xubuntu to continue receiving the important updates. Luckily when I originally put Xubuntu 19.10 on this laptop, I installed the OS using ZFS as the filesystem – &lt;a href=&quot;https://arstechnica.com/information-technology/2019/10/a-detailed-look-at-ubuntus-new-experimental-zfs-installer/&quot;&gt;a feature new to the Ubuntu installer at that time&lt;/a&gt;. Thankfully ZFS proved itself as a great safety net for when the upgrade failed midway through the Xubuntu upgrade to 20.04 (the laptop abruptly turned off). But first, some background on ZFS snapshots.&lt;/p&gt;
&lt;h2 id=&quot;zfs-snapshots&quot;&gt;ZFS Snapshots&lt;/h2&gt;
&lt;p&gt;One of the features not discussed in &lt;a href=&quot;https://jonsimpson.ca/zfs-takeaways/&quot;&gt;my previous article on ZFS&lt;/a&gt; was the powerful snapshotting features available to use. For any ZFS dataset (synonymous with a filesystem), snapshots can be created to mark a moment in time for all data stored within the dataset. These snapshots can be created or removed at any time, and will take up more storage space over time as files are added and removed after a snapshot has been taken. With a snapshot created, at any point in the future it’s possible to rollback to this snapshot, or go and read the data within it. Rolling back to a snapshot effectively erases anything that happened after that snapshot was taken. There are more advanced uses for snapshots that can be discovered in &lt;a href=&quot;https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/&quot;&gt;this great resource&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Right after I installed 19.10 a year ago, I created a snapshot to mark a clean install of Xubuntu in case I messed something up and needed to revert to a fresh new install. I haven’t yet needed to use this at all. Next up I’ll walk through my experience upgrading to 20.04 and using ZFS snapshots.&lt;/p&gt;
&lt;h2 id=&quot;taking-zfs-snapshots&quot;&gt;Taking ZFS Snapshots&lt;/h2&gt;
&lt;p&gt;Xubuntu 19.10 recently stopped receiving security updates, and therefore I needed to upgrade. The 20.04 version is Ubuntu’s long-term support (LTS) release, which provides a number of years of support and security updates – far greater than the non-LTS releases such as 19.10. Going into the upgrade I made sure to make a snapshot of all of the different datasets before performing the upgrade. From the Ars Technica article referenced eariler, the following command takes a recursive snapshot of all datasets that are part of the &lt;code&gt;rpool&lt;/code&gt;:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;sh&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; zfs&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; snapshot&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; -r&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; rpool@2020-upgrade&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;No output means that the command was successful. The following command then shows all of the different datasets that were snapshotted in the pool named &lt;code&gt;rpool&lt;/code&gt;. If you’re following along, this may look a bit different for you. Ubuntu’s installer creates many different datasets for different directories, and two pools, one named &lt;code&gt;rpool&lt;/code&gt;, and the other named &lt;code&gt;bpool&lt;/code&gt; (not important for this article).&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;sh&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; zfs&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; list&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; -rt&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; snap&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; rpool&lt;/span&gt;&lt;span style=&quot;color:#F97583&quot;&gt; |&lt;/span&gt;&lt;span style=&quot;color:#B392F0&quot;&gt; grep&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 2020-upgrade&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                                                0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                                           0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                          1.98G&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     6.49G&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/srv@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                         0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/usr@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                         0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/usr/local@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                  72K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      112K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                         0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/games@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                   0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/lib@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                  35.8M&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     1.32G&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/lib/AccountServices@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/lib/NetworkManager@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;    156K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      284K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/lib/apt@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;              6.41M&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     88.6M&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/lib/dpkg@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;             18.8M&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     40.8M&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/log@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                  23.0M&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     1011M&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/mail@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                    0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/snap@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                    8K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      160K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/spool@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                  72K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      112K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/ROOT/ubuntu_191r26/var/www@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                     0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/USERDATA@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                                       0B&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;       96K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/USERDATA/jon_ip6jrn@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                          396M&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     17.8G&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;rpool/USERDATA/root_ip6jrn@2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;                         404K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      -&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     1.87M&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  -&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now that a snapshot was created, I could make any change to the system and be able to rollback to this snapshot, undoing any changes that were made after the snapshot.&lt;/p&gt;
&lt;h2 id=&quot;upgrading-to-2004&quot;&gt;Upgrading to 20.04&lt;/h2&gt;
&lt;p&gt;To perform the OS upgrade to 20.04, a &lt;code&gt;sudo do-release-upgrade&lt;/code&gt; was entered, initiating the upgrade. Things were progressing well until the laptop’s battery unexpectedly ran out. Plugging in the power and starting the laptop back up, the login screen wasn’t showing up. Great. Thankfully there’s the &lt;a href=&quot;https://askubuntu.com/a/481915/435197&quot;&gt;little-known virtual tty consoles&lt;/a&gt; available a keyboard combo away for cases where you need a terminal but aren’t able to use the graphical window manager.&lt;/p&gt;
&lt;p&gt;Now that I have a terminal on the laptop, poking around has shown that the upgrade was definitely interrupted midway through. Only a handful of packages were installed and many more needed to be installed and configured.&lt;/p&gt;
&lt;p&gt;Instead of going on and trying to manually fix the failed upgrade, why not roll back to the ZFS snapshot taken just before the upgrade and restart the upgrade from this fresh state? This is what is shown next. With the open terminal, executing this command rolled back the system to the state taken at the &lt;code&gt;2020-upgrade&lt;/code&gt; snapshot.&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;sh&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; sudo&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; zfs&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; list&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; -rt&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; snap&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; rpool&lt;/span&gt;&lt;span style=&quot;color:#F97583&quot;&gt; |&lt;/span&gt;&lt;span style=&quot;color:#B392F0&quot;&gt; grep&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 2020-upgrade&lt;/span&gt;&lt;span style=&quot;color:#F97583&quot;&gt; |&lt;/span&gt;&lt;span style=&quot;color:#B392F0&quot;&gt; awk&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; &apos;{print $1}&apos;&lt;/span&gt;&lt;span style=&quot;color:#F97583&quot;&gt; |&lt;/span&gt;&lt;span style=&quot;color:#B392F0&quot;&gt; xargs&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; -I%&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; sudo&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; zfs&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; rollback&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; -r&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; %&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; reboot&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; now&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Performing a reboot right after executing this series of commands makes sure that the system is properly initialized from the &lt;code&gt;2020-upgrade&lt;/code&gt; snapshot’s state.&lt;/p&gt;
&lt;p&gt;To get a better idea of what the above series of commands does, refer to &lt;a href=&quot;https://arstechnica.com/information-technology/2019/10/a-detailed-look-at-ubuntus-new-experimental-zfs-installer/&quot;&gt;this Ars Technica article&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;and-it-worked&quot;&gt;And it Worked&lt;/h2&gt;
&lt;p&gt;After the reboot, the system came back up looking like it was exactly where the snapshot had been taken. I was able to proceed again with the upgrade to 20.04, this time leaving the laptop plugged in.&lt;/p&gt;
&lt;p&gt;The safety net of ZFS snapshots proved itself during this experience. It can feel scary knowing that your data is on the line if things go wrong. Having a strong understanding of how ZFS and related systems work helped me get through this without making any unrecoverable mistakes. If you haven’t read it already, my &lt;a href=&quot;https://jonsimpson.ca/zfs-takeaways/&quot;&gt;previous article on ZFS takeaways&lt;/a&gt; includes many of the references I used to build a strong understanding of ZFS.&lt;/p&gt;</content:encoded></item><item><title>ZFS takeaways</title><link>https://jonsimpson.ca/zfs-takeaways/</link><guid isPermaLink="true">https://jonsimpson.ca/zfs-takeaways/</guid><description>ZFS takeaways</description><pubDate>Mon, 21 Dec 2020 23:44:38 GMT</pubDate><content:encoded>&lt;p&gt;ZFS is quite a feature rich filesystem. If you’re managing a number of hard drives or SSDs, the benefits of using ZFS are numerous. For example, ZFS offers a more powerful software-based RAID than normal hardware-based RAID. Snapshotting is a powerful feature for versioning and replicating data. Data consistency is built in and automatically makes sure all data continues to stay readable over time. The pool of drives can grow or shrink while being transparent to the filesystem above it. These are only a few of the powerful features gained from using ZFS. With this power comes a fair amount of initial overhead learning about how ZFS works. Though it’s worth it if you value flexibility and your data. Here’s a number of resources and tips I found useful as I learned and used ZFS over the past few months.&lt;/p&gt;
&lt;p&gt;For a general overview of ZFS and more of its benefits, see this great &lt;a href=&quot;https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/&quot;&gt;ZFS 101 article on Ars Technica&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;testing-out-zfs-by-using-raw-files&quot;&gt;Testing out ZFS by using raw files&lt;/h2&gt;
&lt;p&gt;Throughout the Ars Technica article above, the author uses files on the filesystem instead of physical devices to test out different pool configurations. This is very handy to build up experience with using the different &lt;code&gt;zpool&lt;/code&gt; and &lt;code&gt;zfs&lt;/code&gt; commands. For example, I used this to get a feel for using the different types of vdevs and using the &lt;code&gt;zpool remove&lt;/code&gt; command. A quick example is as follows:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;sh&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; for&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; n&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; in&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; {1..4}&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;; &lt;/span&gt;&lt;span style=&quot;color:#F97583&quot;&gt;do&lt;/span&gt;&lt;span style=&quot;color:#B392F0&quot;&gt; truncate&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; -s&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 1G&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;$n&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;.raw&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;; &lt;/span&gt;&lt;span style=&quot;color:#F97583&quot;&gt;done&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; ls&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; -lh&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;*&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;.raw&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;-rw-rw-r--&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 1&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; jon&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; jon&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 1.0G&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; Dec&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 21&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 17:09&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/1.raw&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;-rw-rw-r--&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 1&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; jon&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; jon&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 1.0G&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; Dec&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 21&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 17:09&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/2.raw&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;-rw-rw-r--&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 1&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; jon&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; jon&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 1.0G&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; Dec&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 21&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 17:09&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/3.raw&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;-rw-rw-r--&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 1&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; jon&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; jon&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 1.0G&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; Dec&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 21&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 17:09&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/4.raw&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; sudo&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; zpool&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; create&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; test&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; mirror&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/1.raw&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/2.raw&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; mirror&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/3.raw&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; /tmp/4.raw&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; zpool&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; status&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; test&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;  pool:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; test&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt; state:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; ONLINE&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;  scan:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; none&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; requested&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;config:&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	NAME&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;            STATE&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     READ&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; WRITE&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; CKSUM&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;	test&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;            ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	  mirror-0&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	    /tmp/1.raw&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	    /tmp/2.raw&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	  mirror-1&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	    /tmp/3.raw&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	    /tmp/4.raw&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;errors:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; No&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; known&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; data&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; errors&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; sudo&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; zpool&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; remove&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; test&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; mirror-0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; zpool&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; status&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; test&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;  pool:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; test&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt; state:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; ONLINE&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;  scan:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; none&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; requested&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;remove:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; Removal&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; of&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; vdev&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 0&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; copied&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 38.5K&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; in&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 0h0m,&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; completed&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; on&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; Mon&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; Dec&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 21&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; 17:39:09&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 2020&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;    96&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; memory&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; used&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; for&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; removed&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; device&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; mappings&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;config:&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	NAME&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;            STATE&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;     READ&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; WRITE&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; CKSUM&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;	test&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;            ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	  mirror-1&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;      ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	    /tmp/3.raw&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;	    /tmp/4.raw&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;  ONLINE&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;       0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;     0&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;errors:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; No&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; known&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; data&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; errors&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; sudo&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; zpool&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; destroy&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; test&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here I use &lt;code&gt;truncate&lt;/code&gt; to create four 1 GB sized empty files, then created a zpool named &lt;code&gt;test&lt;/code&gt; with two mirror vdevs, using those four raw files. Then &lt;code&gt;mirror-0&lt;/code&gt; is removed, moving any blocks over to &lt;code&gt;mirror-1&lt;/code&gt;. The pool is then finally destroyed.&lt;/p&gt;
&lt;h2 id=&quot;really-understanding-vdevs&quot;&gt;Really understanding vdevs&lt;/h2&gt;
&lt;p&gt;Vdevs are a foundational part of using ZFS, and knowing what each vdev type accomplishes, and their strengths and weaknesses helps build confidence in keeping your data safe. This &lt;a href=&quot;https://www.reddit.com/r/zfs/comments/fn5ugg/zfs_topology_faq_whats_a_zpool_whats_a_vdev/&quot;&gt;Reddit post on the ZFS subreddit&lt;/a&gt; goes into detail about many of these considerations. Again, in advance of making changes to a production ZFS pool, dry-running the changes on a test pool can provide more confidence for the changes to be made.&lt;/p&gt;
&lt;h2 id=&quot;adding-and-removing-disks&quot;&gt;Adding and removing disks&lt;/h2&gt;
&lt;p&gt;One of the newer features allows removing only certain types of vdevs from a pool via the &lt;code&gt;zpool remove&lt;/code&gt; command. &lt;a href=&quot;https://www.reddit.com/r/zfs/comments/bzhb02/help_understanding_which_removal_scenarios_are/&quot;&gt;This Reddit post and answer&lt;/a&gt; goes into some of the different potential scenarios. I did some thorough testing with a test pool of raw files before making any changes to my production pool. The &lt;a href=&quot;https://manpages.debian.org/unstable/zfsutils-linux/zpool.8.en.html&quot;&gt;&lt;code&gt;zpool&lt;/code&gt; manpages&lt;/a&gt; mention the following about what can and can’t be removed:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Removes the specified device from the pool. This command supports removing hot spare, cache, log, and both mirrored and non-redundant primary top-level vdevs, including dedup and special vdevs. When the primary pool storage includes a top-level raidz vdev only hot spare, cache, and log devices can be removed.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I was amazed at the ability of being able to remove a vdev from a pool and have all data transparently moved over to the rest of the pool with the pool still online. One command moved terabytes of data and verified its integrity before removing the vdev.&lt;/p&gt;
&lt;h2 id=&quot;use-different-dev-references-when-creating-pools&quot;&gt;Use different /dev/ references when creating pools&lt;/h2&gt;
&lt;p&gt;A small, but important tip when it comes to which &lt;code&gt;/dev/&lt;/code&gt; to use when adding devices to a production pool is to stick to using the device symlinks provided by &lt;code&gt;/dev/disk/by-id/&lt;/code&gt; or &lt;code&gt;/dev/disk/by-path/&lt;/code&gt; as they are less likely to change. Referencing drives directly like &lt;code&gt;/dev/sdc&lt;/code&gt; can be more risky as these identifiers can change whenever hardware is added or removed from the system. &lt;a href=&quot;https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#selecting-dev-names-when-creating-a-pool-linux&quot;&gt;The OpenZFS docs&lt;/a&gt; provide a great rundown on why this is recommended.&lt;/p&gt;
&lt;h2 id=&quot;other-helpful-resources&quot;&gt;Other helpful resources&lt;/h2&gt;
&lt;p&gt;These were just a handful of my biggest takeaways of using ZFS over the past couple of months. A number of useful resources I’ve found along the way can be found here:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html&quot;&gt;OpenZFS FAQ&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/&quot;&gt;Aaron Toponce’s ZFS guides&lt;/a&gt; (slightly out of date)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://openzfs.org/wiki/System_Administration&quot;&gt;General administration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html&quot;&gt;Performance tuning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://wiki.archlinux.org/index.php/ZFS&quot;&gt;Archlinux wiki on ZFS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://web.archive.org/web/20161028084224/http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_Migration_Strategies&quot;&gt;Whole lot of best practices&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title>Building a homelab - local DNS</title><link>https://jonsimpson.ca/building-a-homelab-local-dns/</link><guid isPermaLink="true">https://jonsimpson.ca/building-a-homelab-local-dns/</guid><description>Building a homelab - local DNS</description><pubDate>Sun, 18 Oct 2020 21:54:03 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;This is a second post in a series of my experiences while Building a Homelab. The first post focusing on the history, hardware, and OS can be found &lt;a href=&quot;https://jonsimpson.ca/building-a-homelab-a-walk-through-history-and-investing-in-new-hardware/&quot;&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Having a number of networked devices at home presents some management overhead. You may find yourself asking, &lt;em&gt;what was the IP address of that one laptop?&lt;/em&gt; or just getting plain old tired of looking at IP addresses. One method people often use to manage their network is to assign Domain Name System (DNS) names to their devices. Instead of constantly typing in &lt;code&gt;192.168.1.1&lt;/code&gt; you could instead assign it the domain name &lt;code&gt;router.home&lt;/code&gt;. Entering &lt;code&gt;router.home&lt;/code&gt; into your browser then transparently brings you to the same webpage as &lt;code&gt;192.168.1.1&lt;/code&gt;. This not only works for browsing the internet, services such as SSH, FTP, and other places where an IP address would normally be used can likely use the friendlier domain name instead.&lt;/p&gt;
&lt;p&gt;So how can this be done? It’s actually quite simple given you have an always-on computer on the same network as the rest of your devices, a router with DNS serving capabilities, or even a DNS provider such as Cloudflare. This article will focus on the DIY solution of running a DNS server on an always on computer.&lt;/p&gt;
&lt;p&gt;Before we get to how to set this up, let’s first explain what DNS is and how it works. Feel free to skip over this section if you’re already knowledgeable.&lt;/p&gt;
&lt;h2 id=&quot;what-is-dns&quot;&gt;What is DNS?&lt;/h2&gt;
&lt;p&gt;DNS is a technology used to translate human-friendly domain names to IP addresses. For example, we can ask a DNS server &lt;em&gt;what is the IP address for the domain google.com?&lt;/em&gt; The DNS server would then respond with the IP address for &lt;code&gt;google.com&lt;/code&gt;: &lt;code&gt;172.217.1.174&lt;/code&gt;. DNS is used for almost every request your computer, phone, smart lightbulbs, and more when it communicates with the internet.&lt;/p&gt;
&lt;p&gt;Anyone who runs a website is using DNS whether they know it or not. Usually the basic premise is that each domain name (eg. &lt;code&gt;mysite.com&lt;/code&gt;) will have a DNS record which points to an IP address. The IP address is the actual computer on the internet which traffic for &lt;code&gt;mysite.com&lt;/code&gt; will be sent to.&lt;/p&gt;
&lt;p&gt;An example of DNS being used can be for &lt;code&gt;jonsimpson.ca&lt;/code&gt;. This site is hosted on a server that I pay for at DigitalOcean. That server has an IP address of &lt;code&gt;1.2.3.4&lt;/code&gt; (a fictitious example). I use Cloudflare as the DNS provider for &lt;code&gt;jonsimpson.ca&lt;/code&gt;. Anytime a user’s browser wants to go to &lt;code&gt;jonsimpson.ca&lt;/code&gt;, it uses DNS to figure out that &lt;code&gt;jonsimpson.ca&lt;/code&gt; is located at &lt;code&gt;1.2.3.4&lt;/code&gt;, then the user’s browser opens up a connection with the server at &lt;code&gt;1.2.3.4&lt;/code&gt; to load this site.&lt;/p&gt;
&lt;p&gt;This is quite a simplified definition of DNS as the system is distributed across the world, hierarchical, and involves hundreds of thousands, if not millions, of different entities. &lt;a href=&quot;https://www.cloudflare.com/learning/dns/what-is-dns/&quot;&gt;Cloudflare provides a more detailed explanation&lt;/a&gt; as to how DNS works, and &lt;a href=&quot;https://en.wikipedia.org/wiki/Domain_Name_System&quot;&gt;Wikipedia has comprehensive coverage&lt;/a&gt; of multiple concerns relating to DNS. But what was explained earlier will provide enough context for this article.&lt;/p&gt;
&lt;h2 id=&quot;running-a-local-dns-server&quot;&gt;Running a local DNS server&lt;/h2&gt;
&lt;p&gt;If there’s an always-on computer – whether that’s a spare computer or Raspberry Pi – a DNS server can run on it and provide DNS capabilities for the local network. &lt;a href=&quot;http://www.thekelleys.org.uk/dnsmasq/doc.html&quot;&gt;Dnsmasq&lt;/a&gt; is a lightweight but powerful DNS server that has been around for a long time. Many hobbyists use Dnsmasq for their home environments since it’s quite simple to configure and get going. One minimal text file is all that’s needed for configuring a functional DNS service.&lt;/p&gt;
&lt;p&gt;I chose to run Dnsmasq on my always-on server in a Docker container. When configuring Dnsmasq, for each device that I wanted to provide a domain name for, I added a line in the configuration mapping its IP address to the name I wanted to give it. For example, my router which lives at &lt;code&gt;192.168.1.1&lt;/code&gt; was assigned &lt;code&gt;router.home.mysite.com&lt;/code&gt;, and my server which lives at &lt;code&gt;192.168.1.2&lt;/code&gt; was assigned &lt;code&gt;server.home.mysite.com&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I then configured my router’s DHCP to tell all clients to use the DNS provided by the server (contact &lt;code&gt;192.168.1.2&lt;/code&gt; for DNS), and configure some manually networked devices to explicitly use the DNS provided by the server. Now on all of my devices I can type in &lt;code&gt;server.home.mysite.com&lt;/code&gt; anywhere I would type &lt;code&gt;192.168.1.2&lt;/code&gt; – so much nicer compared to having to type in an entire IP address.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;nslookup&lt;/code&gt; and &lt;code&gt;dig&lt;/code&gt; are both common command line tools to query the Domain Name System. They are often found already available on many Linux and Unix operating systems, or a straightforward install away. Using these tools can help with inspecting and debugging DNS setups. Here’s an example query using &lt;code&gt;nslookup&lt;/code&gt; to find &lt;code&gt;google.com&lt;/code&gt;:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;sh&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;$&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; nslookup&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; google.com&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;Server:&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;          192.168.1.2&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;Address:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;        192.168.1.2#53&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;Non-authoritative&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt; answer:&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;Name:&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;   google.com&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#B392F0&quot;&gt;Address:&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt; 172.217.1.174&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The first &lt;code&gt;Server&lt;/code&gt; and &lt;code&gt;Address&lt;/code&gt; denote the DNS server that was used to find the IP address for &lt;code&gt;google.com&lt;/code&gt;. In this case, it was the Dnsmasq DNS server running on my home server. &lt;code&gt;Name&lt;/code&gt; and &lt;code&gt;Address&lt;/code&gt; at the bottom signify the actual response we’re interested in. In this case, &lt;code&gt;172.217.1.174&lt;/code&gt; is the IP address I get whenever I go to &lt;code&gt;google.com&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&quot;the-configuration&quot;&gt;The configuration&lt;/h2&gt;
&lt;p&gt;I use &lt;a href=&quot;https://docs.docker.com/engine/&quot;&gt;Docker&lt;/a&gt; as a way to simplify the configuration and running of different services. Specifically, I use &lt;code&gt;&amp;#x3C;a href=&quot;https://docs.docker.com/compose/&quot;&gt;docker-compose&amp;#x3C;/a&gt;&lt;/code&gt; to define the Dnsmasq Docker image to use, which ports should be opened, and where to find its configuration. Here’s the &lt;code&gt;docker-compose.yml&lt;/code&gt; file I use:&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/d13e5bac6413213e49f3d65ea0e9a0e9.js&quot;&gt;&lt;/script&gt;The docker-compose file defines one `dns` service that uses the base image of `&lt;a href=&quot;https://hub.docker.com/r/strm/dnsmasq&quot;&gt;strm/dnsmasq&lt;/a&gt;`, as its one of the more popular Dnsmasq images available on [hub.docker.com](https://hub.docker.com/). The `volume` option specifies that we map a config file located alongside the `docker-compose.yml` file at `config/dnsmasq.conf` into the container’s filesystem at `/etc/dnsmasq.conf`. This is done to allow the container to be recreated at any time while keeping the same configuration. Networking-wise, TCP and UDP port 53 are exposed (yes, DNS operates over TCP sometimes). The `network-mode` is set to the host’s network (Dnsmasq just doesn’t work without this). And lastly, the `NET_ADMIN` capability so that we can use privileged ports below 1024. The last option `restart`, (one of my favourite features of docker-compose) is to keep the container running even when the host reboots or the container dies.
&lt;p&gt;All of these &lt;code&gt;docker-compose.yml&lt;/code&gt; options can be understood in more detail in &lt;a href=&quot;https://docs.docker.com/compose/compose-file/&quot;&gt;Docker’s reference docs&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;More importantly, here’s the &lt;code&gt;dnsmasq.conf&lt;/code&gt; file I use to actually configure Dnsmasq’s DNS capabilities:&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/18bc271a9f7daadb5b5148d159430ad7.js&quot;&gt;&lt;/script&gt;A lot of these settings were based off of the [following blog post](https://web.archive.org/web/20180829053750/http://kb.kristianreese.com/index.php?View=entry&amp;#x26;EntryID=171). Many of these options can be looked up online in [the official documentation](http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html), therefore I will focus on the ones relevant to this article.
&lt;p&gt;I have my Ubiquity router handle providing DHCP for my network, therefore the &lt;code&gt;no-dhcp-interface=eno1&lt;/code&gt; is set here to not provide any DHCP services to the local network, as &lt;code&gt;eno1&lt;/code&gt; is the interface my server uses to connect to the network.&lt;/p&gt;
&lt;p&gt;When Dnsmasq needs to find the DNS record for something that it doesn’t know, it performs a request to an upstream DNS server. &lt;code&gt;server&lt;/code&gt; is used for this and can be specified multiple times to provide redundancy in case one of these DNS servers are down. I’ve specified both the Google and Cloudflare DNS servers. In addition to this, the &lt;code&gt;all-servers&lt;/code&gt; option results in all defined &lt;code&gt;server&lt;/code&gt; entries being queried simultaneously. This has the benefit that one DNS server may respond quicker than the others, resulting a net-faster response to the DNS query.&lt;/p&gt;
&lt;p&gt;The most important part of this &lt;code&gt;dnsmasq.conf&lt;/code&gt; configuration file are the last lines defined in the file that start with &lt;code&gt;address=&lt;/code&gt;. This is Dnsmasq’s way to declare DNS mappings. For example, any device on my network performing a request for &lt;code&gt;server.home.mysite.com&lt;/code&gt; will have &lt;code&gt;192.168.1.2&lt;/code&gt; returned.&lt;/p&gt;
&lt;p&gt;The really cool thing with DNS is that &lt;em&gt;subdomains&lt;/em&gt; for any of these records return the same IP, unless declared explicitly otherwise. An example of this is &lt;code&gt;blog.apps.home.mysite.com&lt;/code&gt; doesn’t exist in the configuration file, but performing a DNS request for it will return &lt;code&gt;192.168.1.2&lt;/code&gt;. This has the effect that &lt;em&gt;“multiple services”&lt;/em&gt; can each have its own domain name, but all be served by the same IP address.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Hopefully this article gives a background about what DNS is, how it can be useful in a home environment, and how to setup and operate a Dnsmasq DNS server. A future post will build on top of the DNS functionality that has been setup here to provide multiple HTTP services running on separate domain names, all served by the same server, for the home network to use.&lt;/p&gt;</content:encoded></item><item><title>Twenty Six</title><link>https://jonsimpson.ca/twenty-six/</link><guid isPermaLink="true">https://jonsimpson.ca/twenty-six/</guid><description>Twenty Six</description><pubDate>Thu, 01 Oct 2020 21:47:05 GMT</pubDate><content:encoded>&lt;p&gt;2020 is the year I turn 26. October 1st is that day! As always with every year for the past seven years now I reflect back on the past year by sharing some accomplishments and progress with life.&lt;/p&gt;
&lt;h2 id=&quot;travel&quot;&gt;Travel&lt;/h2&gt;
&lt;p&gt;Given Covid-19, things have been different but manageable. Before the craziness started I travelled to Nashville in November for RubyConf (a conference for developers) with a number of teammates. We had a great time exploring the city and sampling the different foods over the few days we were there. I never realized how much of a party city Nashville is. The live music and bar scene wants me to go back again with friends. I’ll have to grab a cowboy hat next time I’m there.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2020/10/IMG_20200930_093121_333.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;PEI lighthouse&lt;/figcaption&gt;In the new year there was a work trip to Montreal for a couple days. Great times were spent getting to know new colleagues from our greater team, and having fun with existing. The city never gets boring. This trip is my best celebrity claim to fame: at a speakeasy in old Montreal, Harrison Ford (of Bladerunner, Star Wars) showed up with a few people and had a discreet time. They didn’t want any attention, therefore the group I was with and myself weren’t able to actually meet him. Oh well, at least he walked around a bit so that we could try to remember him a whole lot better.&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Right before things started shutting down in Ottawa at the end of February, my mom and sister gave me a surprise visit. The highlights were hitting up the town with my friends and going to some new restaurants. Recommendations are for the Duelling Pianos event at the Sens House Saturday nights, and breakfast at the Manx on Elgin.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2020/10/IMG_20200912_185259_949.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Isolating in Nova Scotia&lt;/figcaption&gt;Part way through the pandemic, I had the opportunity to travel out east to Nova Scotia with a few friends out to their family’s beach house. We had to quarantine for two weeks while out there but that wasn’t too difficult when the weather was hot, the beach was there, and light beer was plentiful. It was so great the first time that we decided to go back a second time (including a second quarantine) for an entire month. During these two trips I attended one of my best buddy’s wedding, his bachelor party, made apple cider, experienced the east coast cultural norms, and rekindled my love of rocks. Home base was Merigomish, Nova Scotia, but we also stayed in Fredericton, Moncton, Charlottetown, and French River, PEI. It was surreal to experience the relaxed Covid restrictions out in the eastern bubble. I’m thankful that I was able to work remotely while out there and it having no impact on my work. I’m glad to have been able to travel during the pandemic.&lt;p&gt;&lt;/p&gt;
&lt;h2 id=&quot;fitness&quot;&gt;Fitness&lt;/h2&gt;
&lt;p&gt;To stay fit throughout the winter I invested in a smart trainer for my bike. Tied with the virtual cycling app, Zwift, I started crushing seasons of Brooklyn Nine-Nine while keeping my fitness up.&lt;/p&gt;
&lt;p&gt;With Fridays being days off during the summer months at Shopify, this gave my friends and I a lot of great summer days to get up to trouble. Many of the days were spent cycling around Ottawa and going to different beaches in the area. We frequented Aylmer beach on the Quebec side. It was a scenic hour long cycle and great sandy beach, perfect for very hot days. One time we went out to Sablon Beach for hanging out at the large beach and camping over for a night. With all this travel and sun, it’s been satisfying to work on getting a nice, even tan.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2020/10/IMG_20200612_183849_255.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Many cycling trips into Gatineau Park&lt;/figcaption&gt;A few friends and I signed up to run the Ottawa Race Weekend 10k. The last time I did this was in 2017! The official race was cancelled, but could still be run any time over the summer, and anywhere to still get ranked. On the last day to submit the results I was running a 5k and decided to see how much further I could go. Pacing myself and keeping good enough form allowed me to run the 10k successfully! and in a fair time, albeit with some breaks.&lt;p&gt;&lt;/p&gt;
&lt;h2 id=&quot;work&quot;&gt;Work&lt;/h2&gt;
&lt;p&gt;At work my team and I have transitioned to working in a different problem space. It’s quite a refreshing feeling being immersed in a little-known area as it keeps me on my toes. There was a several month period where I was leading two teams of developers on two different projects – one being the old team and project that is wrapping up, the other being the new team and project. This was challenging as my time was split between the two teams. Prioritizing, delegating, and providing the right nudges to influence the projects were critical throughout this period, especially when one team was coming up to a big launch, and the other team was trying to get off the ground. At the end of the day, the launch was wildly successful, and the new team is just about to ship its first version of the service we’re building.&lt;/p&gt;
&lt;p&gt;Earlier in 2020, our greater team moved from the Support org into the Trust org. We’re still solving problems for Support, but our scope has expanded to accelerate the rest of the business. This is a great opportunity for us that hasn’t fully come to fruition just yet. A lot of our services can be leveraged by other teams. The mindset we have now is building services that provide a whole lot of leverage and speed to other teams. Every year things change a whole lot on my team for the better. Three years in now and it’s still a wild ride.&lt;/p&gt;
&lt;h2 id=&quot;numbers&quot;&gt;Numbers&lt;/h2&gt;
&lt;p&gt;As always, here’s some numbers on what I’ve accomplished over the year:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;110 km of running, 11 hours total&lt;/li&gt;
&lt;li&gt;1,039 km of cycling, 47 hours total&lt;/li&gt;
&lt;li&gt;1,010 Github contributions from work and personal projects&lt;/li&gt;
&lt;li&gt;3 books read, 5 on the go&lt;/li&gt;
&lt;li&gt;6 posts published on this blog, 5 unpublished&lt;/li&gt;
&lt;li&gt;2,373,358 steps, 1,642.99 km, 868,256 calories recorded via my Fitbit&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Looking back, my running and cycling rival my 2017 numbers. It’s been a great time outside this year!&lt;/p&gt;
&lt;p&gt;🍻 to another year!&lt;/p&gt;</content:encoded></item><item><title>Building a homelab - a walk through history and investing in new hardware</title><link>https://jonsimpson.ca/building-a-homelab-a-walk-through-history-and-investing-in-new-hardware/</link><guid isPermaLink="true">https://jonsimpson.ca/building-a-homelab-a-walk-through-history-and-investing-in-new-hardware/</guid><description>Building a homelab - a walk through history and investing in new hardware</description><pubDate>Sun, 23 Aug 2020 16:09:14 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;This is the first post in a series of my experiences while Building a Homelab. The second post focuses on setting up a local DNS server and can be found &lt;a href=&quot;https://jonsimpson.ca/building-a-homelab-local-dns/&quot;&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I’ve had a particular interest in home computers and servers for a long time now. One of my experiences was wiring my childhood home up with CAT-5 ethernet to the rooms with TVs or computers and having them all connected to a 24 port 100 Mbps switch in the crawlspace. This was part of a master plan to provide different computers in the house with internet connection (when WiFi wasn’t as good as it is today), TVs with smart media boxes (think Apple TV, Roku, and the like but 10 years ago), and to tie it all together a home server for serving media storing files.&lt;/p&gt;
&lt;p&gt;The magazine &lt;a href=&quot;https://en.wikipedia.org/wiki/Maximum_PC&quot;&gt;Maximum PC&lt;/a&gt; was a major source for this inspiration as they had a number of captivating DIY articles for running your own home server, media streaming devices, and home networking. The memory is a bit rough around the edges, but these projects happened around the same time and on my own dollar – all for the satisfaction of having a bleeding edge entertainment system.&lt;/p&gt;
&lt;p&gt;Around this time Windows had a product out for a year called &lt;a href=&quot;https://en.wikipedia.org/wiki/Windows_Home_Server&quot;&gt;Windows Home Server&lt;/a&gt;. It was a OS which catered towards consumers and their home needs. Some of the features it had was network file shares for storing files, computer backup and restore, media sharing, and a number of extensions available from the community. I built a $400 box to run this OS and store two hard drives. The network switch in the crawlspace was a perfect place to put this headless server. Over many years this server was successfully used for computer backups, file storage, network bandwidth monitoring, and media serving to a number of PCs and media streaming boxes attached to TVs.&lt;/p&gt;
&lt;p&gt;Two of the TVs in the house had these &lt;a href=&quot;https://en.wikipedia.org/wiki/WD_TV#WD_TV_Live&quot;&gt;Western Digital TV Live&lt;/a&gt; boxes for playing media off of the network. These devices were quite basic at the time where only Youtube, Flickr, and a handful of other services were available – lacking Netflix and the other now popular Internet streaming services. Instead, they were primarily built for streaming media off of the local network – in this case off of the home server file share. My family and I were able to watch movies and TV shows from the comfort of our couch, and on-demand. This was crazy cool at the time as most people were still using physical media (DVD/Blu-ray) and streaming media had not taken off yet. I also vaguely remember hacking one of the boxes to put on a &lt;a href=&quot;http://b-rad.cc/wdlxtv/&quot;&gt;community-built firmware&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Windows Home Server was great at the time since it offered all of this functionality out of the box with simple configuration. I remember playing with BSD-based FreeNAS on old computers and being overwhelmed at all of the extra configuration needed to achieve something that you get out of the box with Windows Home Server. Additionally, the overhead of having to administer FreeNAS while only having a vague knowledge of Linux and BSD at the time wasn’t a selling point.&lt;/p&gt;
&lt;p&gt;Now back to current times. I’m in the profession of software development, have been using various Linux distros for personal use on laptops and servers, and would now consider myself a sysadmin enthusiast. Living in my own place, I’ve been using my own Ubuntu-based laptop to run a &lt;a href=&quot;http://plex.tv/&quot;&gt;Plex&lt;/a&gt; media server and stream content to my &lt;a href=&quot;https://www.roku.com/en-ca/products/streaming-stick-plus&quot;&gt;Roku Streaming Stick+&lt;/a&gt; attached to my TV. The laptop’s 1 TB hard drive was filling up. It was also inconvenient to have this laptop constantly on for serving content.&lt;/p&gt;
&lt;p&gt;Browsing Reddit, I came across &lt;a href=&quot;https://www.reddit.com/r/homelab/&quot;&gt;r/homelab&lt;/a&gt;, a community of people interested in owning and managing servers for their own fun. Everything from datacenter server hardware to Raspberry PIs, networking, virtualization, operating systems, and applications. This subreddit gave me the idea of purchasing some decommissioned server hardware from eBay. I sat on the idea for a few months. Covid-19 eventually happened and with all my spare time I gave in to buying some hardware.&lt;/p&gt;
&lt;p&gt;After a bunch of research on r/homelab about which servers are quiet, energy efficient, extendable, and will last a number of years, I settled on a &lt;a href=&quot;https://www.dell.com/aw/business/p/poweredge-r520/pd&quot;&gt;Dell R520&lt;/a&gt; with 2 x 6 cores at 2.4 Ghz, 48 GB DDR3 RAM, 2 x 1 Gbit NICs and 8 x 3.5″ hard drive bays. I bought a 1 TB SSD as the boot drive and a refurbished 10 TB hard drive for storing data.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2020/08/IMG_20200615_093425-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;The front of the Dell R520, showing the 8 3.5″ drive bays and some of the internals.&lt;/figcaption&gt;Since I intended on running the &lt;a href=&quot;https://en.wikipedia.org/wiki/ZFS&quot;&gt;ZFS filesystem&lt;/a&gt; on the data drive, many people gave the heads up that the Host Bus Adaptor (HBA) card (a piece of hardware which connects the SAS/SATA hard drives and SSDs to the motherboard) comes with the default Dell firmware. This default firmware caters towards always running some sort of hardware-based RAID setup, thus hiding the SMART status of all drives. With ZFS, accessing the SMART data for each drive is paramount for data integrity. To get around this limitation with the included HBA card, the homelab community &lt;a href=&quot;https://fohdeesha.com/docs/perc/&quot;&gt;has some unofficial firmware&lt;/a&gt; for it which exposes IT mode, basically a way to pass through each drive to the OS – completely bypassing any hardware RAID functionality. Some breath holding later and the HBA card now had the new firmware.&lt;p&gt;&lt;/p&gt;
&lt;p&gt;I bought a separate HBA card with the knowledge at the time that the one that comes with the Dell R520 didn’t have any IT mode firmware from the community. I ended up being wrong after a whole lot of investigation. Thankfully I should be able to flash new firmware on this card as well and sell it back on eBay.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2020/07/IMG_20200728_193601.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;A Dell Perc H310 Mini Mono HBA (Host Bus Adaptor) used in Dell servers for interfacing between the motherboard and SAS/SATA drives.&lt;/figcaption&gt;As the hardware was all being figured out, I was also researching and playing with different hypervisors – an operating system made for running multiple operating systems on the same hardware. The homelab community often refers to &lt;a href=&quot;https://en.wikipedia.org/wiki/VMware_ESXi&quot;&gt;VMware ESXi&lt;/a&gt;, &lt;a href=&quot;https://www.proxmox.com/en/proxmox-ve&quot;&gt;Proxmox VE&lt;/a&gt;, and even &lt;a href=&quot;https://unraid.net/&quot;&gt;Unraid&lt;/a&gt;. I sampled out the first two, as Unraid didn’t have an ISO available to test with and wasn’t free.&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Going through the pain of making a &lt;a href=&quot;https://www.ventoy.net/en/doc_start.html&quot;&gt;USB stick bootable&lt;/a&gt; for an afternoon, I eventually got ESXi installing on the system. Poking around, it was interesting to see that VM storage was handled by having a physical disk formatted to a VMware format specific to storing multiple VMs – &lt;a href=&quot;https://en.wikipedia.org/wiki/VMware_VMFS&quot;&gt;vmfs&lt;/a&gt;. With the goal of having one of the VMs have full control over a drive formatted with the ZFS filesystem, ESXi provides a feature called hardware passthrough which bypasses virtualization of the physical hardware. One big blocker for myself was the restriction on the free version which &lt;a href=&quot;https://www.nakivo.com/blog/free-vmware-esxi-restrictions-limitations/&quot;&gt;limits VMs to a maximum of 8 vCPUs&lt;/a&gt; – a waste of resources when having 12 CPUs and not enough VMs to utilize them.&lt;/p&gt;
&lt;p&gt;Next, I took a look at Proxmox by loading it up as a VM on ESXi. It was Debian based, which was a plus as I’m comfortable with systemd and Ubuntu systems already. The Proxmox UI appeared like it had quite a few useful features, but didn’t feel like what I needed. I was much more comfortable with the terminal, and these graphical interfaces to manage things felt more like a limitation than a benefit. I could always SSH into Proxmox and manage things there, but there’s always the aspect of learning the intricacies of how this turnkey system was setup. Who knows what was default Debian configured and what was modified by Proxmox. Not to mention, what if Docker or other software was out of date and couldn’t be upgraded? This would be an unnecessary limitation I could avoid if rolling my own.&lt;/p&gt;
&lt;p&gt;Lastly, I went back to my roots – &lt;a href=&quot;https://ubuntu.com/server&quot;&gt;Ubuntu Server&lt;/a&gt;. I spun up a VM of it on ESXi. Since I’m quite used to the way Ubuntu works it was comfortable knowing what I could do. There were no 8 vCPU limitations with Ubuntu Server as the host OS – I can utilize all of the server’s resources. After some thinking I realized I didn’t have any need to run any VMs at the moment. In the past I’ve managed a number of VMs using QEMU using Ubuntu Server, therefore if the need arises again I can pull it off. The reason why I’m not using any VMs is because I’m using Docker for all of my application needs. I already have a few apps running in Docker containers on my laptop that I’ll eventually transfer over to the server. Next up, ZFS on Linux has been available for a while now in Ubuntu, giving me the confidence that the data drive will be formatted with ZFS without a problem.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2020/08/IMG_20200615_092738-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;The internals of the Dell R520 with the thermal cover removed. Note the row of six fans across the width of the case to keep things cool.&lt;/figcaption&gt;In the end I scrapped the idea of running a hypervisor such as EXSi and running multiple VMs on top of it because my workloads all live in Docker containers instead. Ubuntu Server is more suitable since I am able to configure everything from a SSH console. If I may conjecture why the r/homelab community loves their VMs, it may be because many of the hobbyists are used to using them for their day-jobs. There were a handful of folks who did run their own GUI-less, no-VM setups, but it was the minority.&lt;p&gt;&lt;/p&gt;
&lt;p&gt;In the end, Ubuntu Server 20.04 LTS was installed on a 1 TB SSD boot drive. A 10 TB HDD was formatted with ZFS in a single drive configuration. Docker daemon was installed from its official Apt repo, and a number of other non-root processes were installed from Nix and Nixpkgs.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;There’s a few more things I want to discuss regarding the home server. Some of those include using Nix and Nixpkgs in a server environment and some of the difficulties, setting up a local DNS server to provide domain name resolution for devices on the network and in Docker containers, a reverse proxy for the webapps running in Docker containers using the &lt;a href=&quot;https://caddyserver.com/&quot;&gt;Caddy&lt;/a&gt; webserver, and some DataDog monitoring.&lt;/p&gt;
&lt;p&gt;In the future I have plans to expand the amount of storage while at the same time introducing some redundancy with ZFS RAIDz1, diving into being able to remotely access the local network via VPN or some other secure method, and better monitoring for uptime, ZFS notifications, OS notifications, and the like.&lt;/p&gt;</content:encoded></item><item><title>Nix-ify your environment</title><link>https://jonsimpson.ca/nix-ify-your-environment/</link><guid isPermaLink="true">https://jonsimpson.ca/nix-ify-your-environment/</guid><description>Nix-ify your environment</description><pubDate>Sat, 23 May 2020 17:33:01 GMT</pubDate><content:encoded>&lt;p&gt;Over some vacation I put a bunch of effort into rebuilding my dotfiles and other environment configuration using &lt;a href=&quot;https://github.com/rycee/home-manager/blob/master/README.md&quot;&gt;home-manger&lt;/a&gt;, &lt;a href=&quot;https://nixos.org/nix/&quot;&gt;Nix&lt;/a&gt;, and the wealth of packages available in &lt;a href=&quot;https://nixos.org/nixpkgs/&quot;&gt;Nixpkgs&lt;/a&gt;. Previously, all of my system’s configuration was bespoke, unversioned files and random commands run to bring it to its current state. This has worked fine for a number of years, but has some drawbacks such as not being easily reproducible and portable between other systems.&lt;/p&gt;
&lt;p&gt;Our developer acceleration team at Shopify is exploring the feasibility of Nix to solve a number of problems when it comes to supporting development environments for hundreds of software developers. &lt;a href=&quot;https://github.com/burke&quot;&gt;Burke Libbey&lt;/a&gt;, who is spearheading a lot of Nix exploration on the developer acceleration team at Shopify, has a number of excellent resources, two of which are public that have inspired me to look into Nix on my own and write this article. He’s created a number of &lt;a href=&quot;https://www.youtube.com/channel/UCSW5DqTyfOI9sUvnFoCjBlQ&quot;&gt;Nix related youtube videos&lt;/a&gt;, and an article on the &lt;a href=&quot;https://engineering.shopify.com/blogs/engineering/what-is-nix&quot;&gt;Shopify Engineering blog diving into what Nix is&lt;/a&gt;. I won’t go into detail about what Nix is in this article as these resources can help. Instead, I’ll focus on some learnings I’ve had over my time switching to using home-manger, using the Nix language, and the Nix package manager.&lt;/p&gt;
&lt;h2 id=&quot;home-manager&quot;&gt;home-manager&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/rycee/home-manager/blob/master/README.md&quot;&gt;Home-manager&lt;/a&gt; is a program built on top of Nix which makes it simple for a user to manage their environment. Home-manager has a long list of applications which it natively supports configuring, as well as the flexibility to configure programs not yet managed by home-manager. Configuring home-manager is generally as simple as settings a number of key-value pairs in a file. Home-manager then deals with installing, uninstalling, and configuring everything for you from a few simple commands. For example, here’s a simplified version of the home-manager config which installs and configures a few packages and plugins:&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/4b10379caf90f6c44e3a25b8b1f81b3f.js&quot;&gt;&lt;/script&gt;Here is [my full home-manager config](https://github.com/jonniesweb/dotfiles/blob/0b1f9f8ce1b1cc1a965c0408fdbd583b0f0d6479/home-manager/home.nix) for reference.
&lt;p&gt;Some of the biggest factors that sold home-manager to me was easily being able to version my environment’s configuration, installing neovim and all the plugins I use by only specifying the name of the plugin, integrating fzf into my shell with only two config options, zsh installed and configured with plugins, and lastly having an escape hatch to specify custom options in my zshrc and neovim config.&lt;/p&gt;
&lt;p&gt;All of this configuration is now versioned, and any edits I make to my home-manager config or associated config files just require one &lt;code&gt;home-manager switch&lt;/code&gt; to be run to update my entire environment. If I want to try out some new vim plugins, a different shell, or someone else’s home-manager configuration, I can safely modify my configuration and know that I can revert back to the version I have stored in Git.&lt;/p&gt;
&lt;h2 id=&quot;home-manager-tips&quot;&gt;home-manager tips&lt;/h2&gt;
&lt;p&gt;I found the manpages for home-manager to be greatly useful at seeing which configuration options there are as well as what they do, what types it takes, etc. This can be accessed via &lt;code&gt;man home-configuration.nix&lt;/code&gt;. I would always have it open when modifying my home-manager configuration.&lt;/p&gt;
&lt;p&gt;By default home-manager stores its configuration in &lt;code&gt;~/.config/nixpkgs/home.nix&lt;/code&gt;. Home-manager provides an easy way to jump right into editing this file: &lt;code&gt;home-manager edit&lt;/code&gt;. Since this configuration file isn’t in the nicest of places we can change the location of it and still have home-manager pick it up. The best way to configure this would be to use home-manager to configure itself by setting &lt;code&gt;programs.home-manager.path = &quot;~/src/github.com/jonniesweb/dotfiles/home-manager/home.nix&quot;;&lt;/code&gt;, or wherever your configuration file exists. If needed, the &lt;code&gt;HOME_MANAGER_CONFIG&lt;/code&gt; environment variable can be set with the same value to tell the &lt;code&gt;home-manager&lt;/code&gt; command where the config exists if something goes wrong.&lt;/p&gt;
&lt;p&gt;In the switchover I challenged myself to change over from vim to &lt;a href=&quot;https://neovim.io/&quot;&gt;neovim&lt;/a&gt;. This didn’t involve too much effort as my vim config needed a few updates to be compatible with neovim. A large amount of time was saved by the automatic install of the various vim plugins I use.&lt;/p&gt;
&lt;p&gt;In the process I also moved away from using oh-my-zsh to plain old zsh. A fair amount of time was spent understanding the different &lt;a href=&quot;http://zsh.sourceforge.net/Doc/Release/Options.html&quot;&gt;zsh shell options&lt;/a&gt; and which ones oh-my-zsh provided me with. More time was spent configuring my shell’s prompt to use a plugin offering git information and its own theme. Oh-my-zsh does a fair amount of magic in the background at plugin and theme loading, but when looking at it’s source code, it’s actually &lt;a href=&quot;https://github.com/ohmyzsh/ohmyzsh/blob/8b51d17c469a7bafa1193d8af2a52e0d4c645eef/oh-my-zsh.sh#L104-L110&quot;&gt;incredibly&lt;/a&gt; &lt;a href=&quot;https://github.com/ohmyzsh/ohmyzsh/blob/8b51d17c469a7bafa1193d8af2a52e0d4c645eef/oh-my-zsh.sh#L119-L127&quot;&gt;simple&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A lot of language, tools, and other dependencies were left out of my home-manager’s config since Shopify’s internal dev tools handles the majority of this for us on a per-project basis.&lt;/p&gt;
&lt;p&gt;If you’re having home-manager manage your shell, don’t forget to set the &lt;code&gt;xdg.enable = true;&lt;/code&gt; option in your config. Some programs depend on the &lt;code&gt;XDG_*_HOME&lt;/code&gt; environment variables to be present. I can see why this option isn’t enabled by default as many operating systems may have values that differ from the ones defaulted to by home-manager.&lt;/p&gt;
&lt;p&gt;My main development environment is on OS X and therefore differs from Linux in some areas. One of the projects I’m going to keep my eye on is &lt;a href=&quot;https://github.com/LnL7/nix-darwin&quot;&gt;nix-darwin&lt;/a&gt; which appears to be solving the problem that &lt;a href=&quot;https://nixos.org/nixos/&quot;&gt;NixOS&lt;/a&gt; solves for Linux: complete system configuration.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Similar to Docker, Canonical’s snap packages, and Nix ecosystems, we’re going to see a steady increase in the number of companies and individuals utilizing these technologies for their use cases of explicitly defining what software runs on their systems. Docker is already gained critical mass throughout enterprises, Canonical’s Snap packages are slowly picking up steam on Ubuntu-based systems, and Nix appears to be breaking into the scene. I’m rooting for Nix as it has a leg up on other systems with its complete and explicit control over all components which go into making up a program or even a complete system. I’m excited to see how much it will catch on.&lt;/p&gt;</content:encoded></item><item><title>Replacing the engine while driving: planning safety measures into the project</title><link>https://jonsimpson.ca/replacing-the-engine-while-driving-planning-safety-measures-into-the-project/</link><guid isPermaLink="true">https://jonsimpson.ca/replacing-the-engine-while-driving-planning-safety-measures-into-the-project/</guid><description>Replacing the engine while driving: planning safety measures into the project</description><pubDate>Sun, 03 May 2020 23:37:32 GMT</pubDate><content:encoded>&lt;p&gt;When a project is created for changing out an existing dependency with another one, there are many factors that should be thought of. If there is a high risk involved if something goes wrong, then more caution would need to be taken. If a lot of effort is involved to add the new dependency, then it may make sense to incrementally ship the new dependency alongside the old dependency. Lastly, depending on the complexity, will a lot of work be done up front to integrate the new dependency, or will it be hacked in then cleaned up after the launch?&lt;/p&gt;
&lt;p&gt;These are some of the questions that can be asked to help guide the planning for a project which changes out one dependency for another. The same pattern can be used for similar types of work besides dependencies such as service calls, database queries, integrations, and components, to name a few.&lt;/p&gt;
&lt;p&gt;Generally, a project like this can have the following steps:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Investigate&lt;/li&gt;
&lt;li&gt;Refactor&lt;/li&gt;
&lt;li&gt;Add the new dependency&lt;/li&gt;
&lt;li&gt;Launch&lt;/li&gt;
&lt;li&gt;Remove the old dependency&lt;/li&gt;
&lt;li&gt;Refactor&lt;/li&gt;
&lt;li&gt;Celebrate&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Not all of these steps need to be performed in this order or at all. For example, the &lt;em&gt;launch&lt;/em&gt;, &lt;em&gt;add the new dependency&lt;/em&gt;, and &lt;em&gt;remove the old dependency&lt;/em&gt; can all be completed at once if the risk and simplicity allows for it.&lt;/p&gt;
&lt;p&gt;Refactoring may be redundant to mention given developers should be refactoring as they normally make changes to the codebase. I am being explicit about mentioning it here so that the team can use deliberate refactoring to their advantage to make the new dependency easy to add, as well as removing any unnecessary abstractions or code leftover from the old dependency.&lt;/p&gt;
&lt;p&gt;A special shoutout goes to the celebrate step, since I certainly know that teams can be eager to move onto the next project and forget to appreciate the hard work put into achieving its success.&lt;/p&gt;
&lt;p&gt;Now lets get into the different concerns that can apply to projects like this, and the practices that can help it succeed.&lt;/p&gt;
&lt;h2 id=&quot;concerns-and-practices&quot;&gt;Concerns and Practices&lt;/h2&gt;
&lt;p&gt;The most important step every project should have, the investigation step informs how the rest of the project should work. During this step as much information should be gathered to give a confident enough plan for the rest of the project to proceed. Some of the actions taken during this step are to understand how the current system works, how the dependency to be replaced is integrated, how critical this dependency is to the businesses operations, how the new dependency should be integrated, and any cleanup and refactorings that should be made before, during, and after the new dependency is added and launched.&lt;/p&gt;
&lt;p&gt;A big topic to explore is what the system would look like if the system was originally built using this new dependency. This mental model forces the envisioning of a clean integration with the new dependency, ignoring any sort of legacy code, and free of any constraints within the system. This aligns the team with the ultimate design of the system that uses this new dependency. The team should strive and fight for reaching this state at the end of the project since it can result in the cleanest and most maintainable code. If this is not part of the end goal for the project then the system may carry forward unnecessary code and bad abstractions that result in tech debt piling up for future developers.&lt;/p&gt;
&lt;p&gt;Another big consideration is what will the system look like when it is launched? Will the new and old dependency both have to coexist in the system so that there can be a gradual transition? Or maybe the change is small enough that the old dependency can be removed and the new one added in one change and no user interruption will occur? If there does exist a period where the two dependencies need to coexist then how can this behaviour be implemented to fit in with the ultimate design discussed earlier? This may involve doing a lot of refactoring up front to get the system into a state that fits the ultimate design at the end of the project. There also exists the tradeoff that the new dependency can be hastily integrated into the system alongside the other dependency, then all of the hacks can be undone with a sufficient cleanup and refactoring after the launch. Both methods have different tradeoffs: with the up front refactoring, the new dependency is integrated the right way into the system, but may require a lot of refactoring of the surrounding and old dependency’s code. On the other hand, hacking the new dependency into the same pattern that the old dependency uses gets the project faster to launch, but can lead to bugs and integration troubles if the interfaces or behaviour are different. Regardless, at the end of the project, the system should look as if the old dependency never existed.&lt;/p&gt;
&lt;p&gt;How much risk is involved with switching over to use the new dependency? If it is very risky then more precautions should be taken to reduce the risk. If there is very little risk then little to no precautions can be taken, and the project can move much faster. Some methods I have used to help with reducing risk are collecting metrics on the different types of requests and responses between the two dependencies. These metrics can be early warning systems to the new dependency behaving incorrectly. Being able to rollback to the old dependency via a deploy or a feature flag provides the flexibility to quickly switch back to the old dependency in case things go wrong. Dark launching the new dependency to production has been a practice I often encourage my team to use, since it allows for testing out the new code in the production environment without affecting real users. Lastly, beta testing with a percentage of users can also reduce the impact since if something goes wrong with the new dependency, only a fraction of the users are affected. Many of these practices are complimentary and can be used together to achieve the best mix of handling risk.&lt;/p&gt;
&lt;p&gt;How much effort is involved to add the new dependency? Effort could mean the amount of changes to make to the code or the time involved. If there is a significant amount of effort then it absolutely makes sense to incrementally ship small changes to production. Even if the changes are not live code paths, at least the team can review each change, provide feedback, and course correct if needed. I have seen in small effort projects all work was deployed to production in one pull request. On large effort projects the work was split across many pull requests written and deployed over time by a team. This latter case enabled the team to dark launch the new dependency to production and had the added safety to switch back to the old dependency if needed.&lt;/p&gt;
&lt;p&gt;Given the considerations and practices discussed throughout this article, it is best to validate that they will actually work when it comes time to execute. If a team is experienced with these projects and practices then the investigation period can be quicker, otherwise if the team is less experienced with these projects the investigation should be more substantial. Building a prototype can help build confidence in the assumptions made about running the project and guide the team going forward. A good prototype proves or disproves assumptions in as minimal time as possible. Once the team is confident in their plan, start the project, and do not be afraid to reevaluate the choices already made as the project goes on.&lt;/p&gt;</content:encoded></item><item><title>RubyConf 2019 Talks - Day 2</title><link>https://jonsimpson.ca/rubyconf-2019-talks-day-2/</link><guid isPermaLink="true">https://jonsimpson.ca/rubyconf-2019-talks-day-2/</guid><description>RubyConf 2019 Talks - Day 2</description><pubDate>Sun, 08 Dec 2019 22:02:29 GMT</pubDate><content:encoded>&lt;p&gt;Here’s a continuation of the &lt;a href=&quot;https://jonsimpson.ca/rubyconf-2019-talks-day-1/&quot;&gt;previous post covering day 1&lt;/a&gt;, this one instead on the talks I attended for Day 2 of RubyConf 2019! Headings are linked to a video of the talk.&lt;/p&gt;
&lt;h2 id=&quot;injecting-dependencies-for-fun-and-profit&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=b5vfNcjJsLU&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=44&amp;#x26;t=0s&quot;&gt;Injecting Dependencies For Fun and Profit&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Chris Hoffman discussed the basics and the benefits of dependency injection, mentioning that it’s an alternative to mocking when testing. The benefit of dependency injection is that all of the classes using dependency injection list their dependencies explicitly in the initializer. This benefits people who go to read that code later, especially new devs to the team, since all of a classes dependencies are centralized in one location. It also makes testing classes in isolation easier since test doubles of the dependencies can be passed into the classes initializer, compared to the implicit method of mocking objects which can lead to dependencies being forgotten deep in the class.&lt;/p&gt;
&lt;p&gt;One of the interesting patterns that Chris’ company adopted (and that I don’t necessarily agree with) to manage dependencies with dependency injection in their codebase is to have a dependency god object. This object is initialized at the start of the program and contains a reference to each dependency in their system. This dependency object is then passed by reference into each classes initializer. When a class needs a dependency it refers to the dependency in the dependency god object. This appears to be a purely functional way of using dependency injection compared to the more popular solution of using globally accessible dependency objects. &lt;a href=&quot;https://dry-rb.org/gems/dry-auto_inject/&quot;&gt;&lt;code&gt;dry-rb&lt;/code&gt;‘s &lt;code&gt;auto_inject&lt;/code&gt;&lt;/a&gt; is a common dependency injection library which uses the globally accessible dependency pattern.&lt;/p&gt;
&lt;p&gt;Overall, dependency injection is a great pattern for scaling medium to large codebases and making testing components simpler.&lt;/p&gt;
&lt;h2 id=&quot;the-fewer-the-concepts-the-better-the-code&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=unpJ9qRjdMw&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=31&amp;#x26;t=0s&quot;&gt;The Fewer the Concepts, the Better the Code&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;David Copeland presented the idea of programming with fewer concepts for better code comprehension between many developers. This talk was a bit of a shock since it goes against many of my ideals, but I fully enjoyed challenging my beliefs on the subject. For context, David’s team was just himself and a lead with a non-computer-science background at a small company. When David’s code was reviewed by his lead, the code was critiqued to be simpler and easier to understand. Over time David figured out that using more generic programming language concepts, such as &lt;code&gt;for&lt;/code&gt;, &lt;code&gt;if&lt;/code&gt;, &lt;code&gt;return&lt;/code&gt;, etc. common to most procedural programming languages was what his lead was pushing him towards.&lt;/p&gt;
&lt;p&gt;The talk then went into an example of some code which a Rubyist would have written with &lt;code&gt;each&lt;/code&gt;, &lt;code&gt;map&lt;/code&gt;, implicit returns, etc. contains many more concepts that a developer would have to know about compared to the same code written with much fewer concepts. An example of the benefit of writing code to use these generic programming language concepts is that learning to use new programming languages can be much simpler since they all generally have the same shared concepts. Onboarding new developers onto the team can be much faster if the dev only has to understand a small subset of programming language features. The Go programming language was compared to this practice as it has a smaller number of concepts than other programming languages.&lt;/p&gt;
&lt;p&gt;At the end of the talk &lt;a href=&quot;https://www.youtube.com/watch?time_continue=1911&amp;#x26;v=unpJ9qRjdMw&quot;&gt;I asked the question&lt;/a&gt; about whether this style of programming may outweigh benefits by making it easier to introduce more bugs. Using functional programming language features such as the &lt;code&gt;&amp;#x3C;a href=&quot;https://ruby-doc.org/core/Enumerable.html&quot;&gt;Enumerable&amp;#x3C;/a&gt;&lt;/code&gt; collection of functions in Ruby can make code much easier to reason about. David agreed that more bugs are definitely a possibility, but he doesn’t have anecdotal evidence from his team.&lt;/p&gt;
&lt;h2 id=&quot;disk-is-fast-memory-is-slow-forget-all-you-think-you-know&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=crbyeyPS7HE&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=33&amp;#x26;t=0s&quot;&gt;Disk is Fast, Memory is Slow. Forget all you Think you Know&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Another controversial talk I wanted to challenge my beliefs with, this time challenging the principle of memory not always being faster than disk. Daniel Magliola presented this conundrum in the form of a improvement he was attempting to make. The improvement was making metrics available for his cluster of forking Unicorn processes. When using Prometheus to collect metrics from apps, it queries each app at a specific endpoint to read in the metrics and their values. The problem with forking web servers is that when the request comes in to return all of the metrics, the request is dispatched to one of the Unicorn processes, only returning that process’ metrics, not the group of forked Unicorn processes as it should.&lt;/p&gt;
&lt;p&gt;Daniel went down the rabbit hole on this &lt;a href=&quot;https://github.com/prometheus/client_ruby/issues/9&quot;&gt;GitHub issue&lt;/a&gt; looking for performant ways to support metrics collection for forking webservers. With the goal of keeping the recording of metrics as close to 1 microsecond as possible, some solutions that were investigated involved storing metrics in Redis, the Ruby class PStore which transactionally stores a hash to a file, and tenderlove/mmap library to share a memory mapped hash to each process. Unfortunately none of the potential solutions could beat 1 microsecond.&lt;/p&gt;
&lt;p&gt;The solution Daniel discovered, and expertly discussed throughout his talk was using plain old files and file locks. This solution ended up only taking ~6 microseconds per metric write and was much more reliable and simpler than dealing with mmap’ed memory, or more running infrastructure. The title of the talk was misleading, and was touched on near the end of the talk as this file-based solution benefitted from operating system optimizations made to cache writes in main memory and disk caches. According to the program the file was updated successfully to disk, with proper locking to prevent multiple writers tripping over each other, but this was all possible by the performant abstraction our modern operating systems provide us with.&lt;/p&gt;
&lt;h2 id=&quot;digging-up-code-graves-in-ruby&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=ffrv-JppavY&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=50&amp;#x26;t=0s&quot;&gt;Digging up Code Graves in Ruby&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Noah Matisoff went into how code graves, aka dead code comes about. Oftentimes developers can be modifying existing parts of code and stop calling other pieces of code, either in the same or different file. That code may still have tests, so test code coverage metrics can’t really help here. Feature flags, where 100% of users are going through one code path and not the other are also prime candidates for code that doesn’t need to exist.&lt;/p&gt;
&lt;p&gt;Code coverage tools can be run in production, or in development and help give a good idea of what parts of code are never reached. Static code analysis tools can also help determine if code isn’t referenced anywhere, but it is a hard problem to solve with Ruby since the language isn’t typed and is quite dynamic. Another solution to help keep dead code out of codebases was to add todos to the codebase. Todos can be setup to remind developers to remove bits of code from the codebase or perform other actions. Some automations were shown to make todos more actionable.&lt;/p&gt;</content:encoded></item><item><title>RubyConf 2019 Talks - Day 1</title><link>https://jonsimpson.ca/rubyconf-2019-talks-day-1/</link><guid isPermaLink="true">https://jonsimpson.ca/rubyconf-2019-talks-day-1/</guid><description>RubyConf 2019 Talks - Day 1</description><pubDate>Sun, 01 Dec 2019 20:11:04 GMT</pubDate><content:encoded>&lt;p&gt;I attended RubyConf this year in Nashville, Tennessee with a few of my teammates from Shopify. What a great city and a great first time attending RubyConf!&lt;/p&gt;
&lt;p&gt;I took notes on many of the talks I attended and here are the summaries for the first of the three days. &lt;a href=&quot;https://jonsimpson.ca/rubyconf-2019-talks-day-2/&quot;&gt;Day 2 is available here&lt;/a&gt;. Headings that have links go to a video of the talk.&lt;/p&gt;
&lt;h3 id=&quot;matz-keynote--ruby-progress-report&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=2g9R7PUCEXo&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=2&amp;#x26;t=0s&quot;&gt;Matz Keynote – Ruby Progress Report&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Matz started off the conference with his talk on the upcoming Ruby 3, talking about some upcoming features with it, and the timeline. Ruby 3 will absolutely be released at the end of 2020, removing half-baked features if necessary to keep it on track. This probably also means that if the 3×3 performance goals aren’t fully met, then it’ll still be shipped. He spent some time on talking about being a Rubyist, as the majority of attendees were new to RubyConf, encouraging people to have discussions and contribute to the future of Ruby.&lt;/p&gt;
&lt;p&gt;Matz went into some of the new features going into Ruby 2.7 and Ruby 3, and some of the features or experiments being removed. Some of the biggest hype was around the addition of pattern matching, the just in time compiler (JIT), emojis (though Matz didn’t think so), type checking, static analysis, and an improved concurrency model via guilds (think Javascript workers) and fibers. Some features or experiments that were removed were &lt;a href=&quot;https://bugs.ruby-lang.org/issues/16275&quot;&gt;the &lt;code&gt;.:&lt;/code&gt; (shorthand for &lt;code&gt;Object#method&lt;/code&gt;)&lt;/a&gt;, the pipeline operator, deprecating automatic conversion of hash to keyword arguments. Some attendees were vocal about getting more rationale about removing these features, and Matz was more than accommodating to explain in more detail.&lt;/p&gt;
&lt;h3 id=&quot;no-return-beyond-transactions-in-code-and-life&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=qoriifl-z3Q&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=36&amp;#x26;t=0s&quot;&gt;No Return: Beyond Transactions in Code and Life&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Avdi Grimm’s talk focused on discussing the unlifelike constraints that are imposed on users when performing things online. For example, filling out a survey or form online may result in the user losing their progress if they exit their browser. In real life this doesn’t happen, so why should we constrain these transactions so much? Avdi recommends that when building out these processes, these transactions, that we should instead think of it as a narritive, one stream of information sharing that only requires the user to complete a step when it’s really necessary. Avdi related this to our code by suggesting a few concepts that can make our programs more narrative-like such as embracing state and history of data by utilizing event-sourcing/storming and temporal modelling, failing forwards in code by treating exceptions as data and expecting failures, and interdependence in code by using back pressure, and circuit breakers.&lt;/p&gt;
&lt;h3 id=&quot;investigative-metaprogramming&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=bJMzWumXPmo&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=37&amp;#x26;t=0s&quot;&gt;Investigative Metaprogramming&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Betsy Haibel talked about an effective way of figuring out a bug during a potentially painful upgrade of their Rails app to 6.0. Through the use of metaprogramming, she was able to fix a frozen hash modification bug that would have otherwise been quite difficult to debug. She accomplished this feat by monkey patching the &lt;code&gt;Hash#freeze&lt;/code&gt; method, saving a backtrace whenever it is called. Then in the &lt;code&gt;Hash#[]=&lt;/code&gt; method, rescue any runtime exceptions that occur and start a debugger session. This helped her narrow down exactly where the hash was frozen earlier on in the code.&lt;/p&gt;
&lt;p&gt;Besty then went into detail on what metaprogramming is, and how it differs from language to language. Java, for example has distinct loadtime and runtime phases when the application is starting up. Ruby, on the other hand is both loading classes and executing code at the same time since it’s all performed together during runtime.&lt;/p&gt;
&lt;p&gt;Lastly, the talk provided a pattern for using metaprogramming to investigate bugs or other problems in code. Through reflection, recording, and reviewing, the same pattern can be applied to help debug even the most complex code. The reflection step makes up determining what part of the code early on leads to the program failing. The moment that it occurs can be found by inspecting the backtrace at that point in time. Next is the recording step where we want to patch the code that we’ve identified from the reflection step to save the backtrace. This can be done either by saving the &lt;code&gt;caller&lt;/code&gt; to an instance variable, class variable, logging. To get a foothold into the code, the patching can be accomplished by using &lt;code&gt;Module#prepend&lt;/code&gt; or even the TracePoint library. Lastly, reviewing is the step in which we observe an event in the system (eg. an Exception) and either pause the world or log some info for further reading. An example of this would be to put in a breakpoint or debugger statement, optionally making it a conditional breakpoint to help filter through the many occurrences.&lt;/p&gt;
&lt;h3 id=&quot;ruby-ate-my-dsl&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=Ov-tMtOkKS4&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=41&amp;#x26;t=0s&quot;&gt;Ruby Ate My DSL!&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Daniel Azuma presented about what DSLs (Domain Specific Languages) are, the benefits of them, and how they work. One of the biggest takeaways from this talk was that DSLs are more like Domain Specific Ruby as we’re not building our own language, instead the user of these DSLs should fully expect to be able to use Ruby while using DSLs.&lt;/p&gt;
&lt;p&gt;Daniel also went on to mention how to build your own DSL, mentioning a few gotchas as he went. One of those was that since &lt;code&gt;instance_eval&lt;/code&gt; is used throughout implementing DSLs, that we should be aware of users clobbering existing instance variables and methods. One solution is to have a naming convention for the DSLs internal instance variables and private methods (eg. prefixing with underscore characters). Alternatively, another way of preventing this clobbering from going on is to separate the DSL objects from the implementation which operates on those objects. This then has the effect that the user of the DSL has the minimum surface area needed to set the DSL up, removing the possibility of overwriting instance variables or methods the internal DSL needs to run.&lt;/p&gt;
&lt;p&gt;Design DSLs which look and behave like classes. Specifically, whenever blocks are used, have them map to an instance of a class. RSpec is a great example of this where &lt;code&gt;describe&lt;/code&gt; and &lt;code&gt;it&lt;/code&gt; calls are blocks which create instances of classes. The &lt;code&gt;it&lt;/code&gt; call creates instances that belong to the &lt;code&gt;describe&lt;/code&gt; instance. Where things get more interesting and lifelike is if helper methods and instance variables defined higher up in a DSL can be used further down in the DSL. This is the concept of lexical scoping.&lt;/p&gt;
&lt;p&gt;Lastly, constants are a pain to work with in Ruby. They don’t behave as expected when using blocks and evals. Some DSLs provide alternatives to constants, for example RSpec’s &lt;code&gt;let&lt;/code&gt;.&lt;/p&gt;
&lt;h2 id=&quot;mrubyc-running-on-less-than-64kb-ram-microcontroller&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=1VFPSHs3WvI&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=6&amp;#x26;t=0s&quot;&gt;mruby/c: Running on Less Than 64KB RAM Microcontroller&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Hitoshi HASUMI presented mruby/c, an mruby implementation focused on very resource constrained devices. Where mruby focuses on devices with 400k of memory, mruby/c is for devices with 40k of memory. Devices with this small amount of memory can be microcontrollers which are cheap to run and offer many benefits over devices which run operating systems. Some benefits are instantaneous startup and being more secure.&lt;/p&gt;
&lt;p&gt;Hitoshi focused his talk on the work he performed building out IoT devices to monitor temperatures of ingredients at a sake brewery in Japan. These devices had a way for workers to measure temperatures, display the reading, as well as send that reading back to a server for further processing. Hitoshi made it clear that there are many different thing that could go wrong in the intense environment of a brewery. High temperatures, hardware failure, resource constraints, etc.&lt;/p&gt;
&lt;p&gt;The latter half of the talk was focused on how mruby/c works and how to use it. mruby/c uses the same bytecode as mruby, but removes a few features that regular Ruby developers are used to having, namely: modules and the stdlib. mruby/c compiles down to C files and provides it’s own realtime operating system. Hitoshi finishes the talk with plugging a number of libraries and tools that he’s developed to help with debugging, testing, and generating code. Those being &lt;code&gt;mrubyc-debugger&lt;/code&gt;, &lt;code&gt;mrubyc-test&lt;/code&gt;, and &lt;code&gt;mrubyc-utils&lt;/code&gt;, respectively.&lt;/p&gt;
&lt;h3 id=&quot;statistically-optimal-api-timeouts&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=OxNL0vRsXi0&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=43&amp;#x26;t=0s&quot;&gt;Statistically Optimal API Timeouts&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Daniel Ackerman discussed the widespread use of APIs and how timeouts for those remote requests are not being configured efficiently. He introduces the problem that timeouts should be optimized for the best user experience – the fastest response. Given a slow responding API request, we should timeout if we have high confidence that the request is taking too long. &lt;em&gt;He prefixed the rest of his talk explaining that setting the timeout to the 95th percentile is a quick but accurate estimate.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Since APIs are all different, Daniel presents a mathematical proof of determining statistically optimal API request timeouts. By analyzing a histogram of the API response times, we can determine the optimal timeout that balances user experience with timing out requests. Slow API requests often mean that the service is under heavy load or not responding.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/ankane/the-ultimate-guide-to-ruby-timeouts&quot;&gt;The Ultimate Guide to Ruby Timeouts&lt;/a&gt; was mentioned as a go-to source for configuring timeouts and knowing which exceptions are raised for many commonly used libraries. Definitely a useful resource. Daniel finished his talk with a plug to his gem &lt;code&gt;rb_maxima&lt;/code&gt;, a library which makes it easy to use the Maxima algebraic system from Ruby.&lt;/p&gt;
&lt;h2 id=&quot;collective-problem-solving-in-software&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=1oeigCANJVQ&amp;#x26;list=PLE7tQUdRKcyZDE8nFrKaqkpd-XK4huygU&amp;#x26;index=7&amp;#x26;t=0s&quot;&gt;Collective Problem Solving in Software&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Jessica Kerr talked about the idea of cameratas – the concept of a group of people who discuss and influence the trends of a certain area. More formally, camerata came from the Florentine Camerata, a group of renaissance musicians and artists gathered in Florence, Italy who helped develop the genre of opera. Their work was revolutionary at the time.&lt;/p&gt;
&lt;p&gt;Jessica then related it to the great ideas that have come out of ThoughtWorks, a London-based consulting company. Their incredible contributions over the years have included the concepts of Agile, CI, CD, and DevOps to name a few, have influenced the entire software industry and has set the bar higher.&lt;/p&gt;
&lt;p&gt;In general, great teams make great people. Software teams are special in that they consist of the connections between the people in the team as well as the tools that the team uses. Jessica relates this to a big socio-technical system, introducing the term symmathesy to capture the idea that teams and their tools learn from each other. No one person has a full understanding of the systems they work on, therefore the more symmathesy going on in the team, the better the team and system is. This is similar to the concept of senior developers being able to understand the bigger picture when it comes to teams, tools, and people compared to new developers usually concerned about their small bit of code.&lt;/p&gt;
&lt;p&gt;The talk was closed by encouraging dev teams to incentivize putting the team first compared to the individual, grow teams by increasing the flow of information sharing and back and forth with their tools. Lastly, great developers are symmathesized.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://jonsimpson.ca/rubyconf-2019-talks-day-2/&quot;&gt;Summaries of Day 2’s talks are available here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title>Twenty Five</title><link>https://jonsimpson.ca/twenty-five/</link><guid isPermaLink="true">https://jonsimpson.ca/twenty-five/</guid><description>Twenty Five</description><pubDate>Thu, 03 Oct 2019 02:14:45 GMT</pubDate><content:encoded>&lt;p&gt;My friends half-jokingly call it my quarter-century birthday. This year I’m a day late, and according to previous posts, sick again at this same time of year surprisingly. It may have been from the surprise birthday party my friends threw. Oh well, lets get on with the show.&lt;/p&gt;
&lt;p&gt;It’s always hard to remember back to what October of last year was like, especially after a year like this. Around November of 2018 I had a wonderful surprise promotion: becoming my own team’s manager. With that came a whole new challenge of understanding people instead of just software. I was thrown in the deep end one day and had to figure out what managing people was all about. From talking with colleagues to reading books, I performed a lot of research over the past year to understand what it means to manage people, especially as a manager of a development team.&lt;/p&gt;
&lt;p&gt;Besides career-related achievements, I also had the great opportunity to vacation in Mexico and New York with a bunch of my close buddies, and to partake in a few extra ski trips over the winter to Mont Tremblant and Camp Fortune. I surprised myself with my skiing skills – I must not have gone very recently. Over the summer a number of visits to the cottage was made, one of the times with a bunch of good friends.&lt;/p&gt;
&lt;p&gt;Something I wasn’t expecting to have happened at all was to get my SSI Open Water Diving certification. This allows me to go scuba diving to maximum depths of 60 ft. A friend of mine suggested the idea to a few of us and we all had nothing to lose, so why not! Five pool dive sessions, three classroom sessions, and four dives at Morrison’s Quarry over a weekend gave the three of us the ability to travel anywhere around the world and go diving. It was a great learning experience as we had excellent instructors and one on one training as luck would have it. The three of us will be planning a diving trip in the new year!&lt;/p&gt;
&lt;p&gt;This was the year where more weddings made it my way. One of them was for my longtime cottage neighbour who is around my age, and another wedding was for my Aunt. Both were vastly different weddings, but both were very enjoyable.&lt;/p&gt;
&lt;p&gt;In late fall and early spring there’s a large period of time where bad weather holds me back from going running and cycling in the summertime, or skating on the canal in the wintertime. To keep myself active I bought myself an indoor trainer for my road bike. I was able to train indoors a few times a week pretty consistently over the winter. Keeping this training up for all of the cold months allowed me to jump on my bike in the spring and not miss a beat.&lt;/p&gt;
&lt;p&gt;Here’s a few interesting stats of mine from the past year:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;57 hours of running/cycling/training, 1220 km total&lt;/li&gt;
&lt;li&gt;14 articles for this blog written, 7 published&lt;/li&gt;
&lt;li&gt;7 books read – the either being In The Plex, or the Elon Musk biography&lt;/li&gt;
&lt;li&gt;1932 GitHub contributions from work and personal projects&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;🍻 to another year!&lt;/p&gt;</content:encoded></item><item><title>Accelerate your team as its Lead</title><link>https://jonsimpson.ca/accelerate-your-team-as-its-lead/</link><guid isPermaLink="true">https://jonsimpson.ca/accelerate-your-team-as-its-lead/</guid><description>Accelerate your team as its Lead</description><pubDate>Fri, 16 Aug 2019 12:20:28 GMT</pubDate><content:encoded>&lt;p&gt;Building software can be hard. Requirements can be swept under the rug, only to find out that: &lt;em&gt;Whoops. We shouldn’t have forgotten about those&lt;/em&gt;. Stakeholders requests can silently be forgotten, only to be brought up later, eroding trust. Decisions can take a long time to make if the right people are missing, and even if the room doesn’t know they have the power to. Developers may also be blocked on their work with not knowing that one critical piece of information. Who best to alleviate the previously mentioned pains other than the team’s Lead?&lt;/p&gt;
&lt;p&gt;Call the position a Lead Developer. Call it a Development Manager. Call it whatever. Even if you don’t have the title, the ability to influence and lead people to make the team’s product, people, or processes better are well needed in all development teams.&lt;/p&gt;
&lt;p&gt;As a Lead, your back is on the line when it comes to everything your team does. The glory you pass down onto the individual team members, or the entire team. The failures you have to suck up and own yourself. Since the engineering lead is on the line when it comes to the team’s output and performance, it’s a large incentive to use your experiences, skills, and contacts to supercharge your team.&lt;/p&gt;
&lt;p&gt;One of those methods of influence I have been using recently is picking up and coming to some sort of closure for decisions that haven’t been made, or information that is needed by the development team.&lt;/p&gt;
&lt;p&gt;I am of the type of Lead who will perform a gut check and directly ask a developer if they’re blocked on missing information. If the way to unblock them is clear and simple, I point them in the right direction, backing it up with whatever details about the technical, vision, or user story – all without having to reach out to the person best suited to answer. If something is of importance, where the wrong answer could waste time or affect the product in a negative way, reaching out to the person who would know the answer is often necessary. Making it your personal mission to figure that out, and report back to the dev about the answer builds trust that &lt;em&gt;yes, you the dev Lead&lt;/em&gt; can help.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Side note: If the dev is skilled enough in knowing a problem area and is able to talk with stakeholders or the people necessary to help solve their problem, encourage them to own figuring this out themselves instead of dealing with it yourself. Empowering your dev to be more independent through dealing with people they may not have met grows the number of contact they have, improves their ability to be resourceful, and can result in being more engaged with the problem. Since this may be an uncharted area for the dev, one on one time is quite valuable for talking about your report’s recent situations, helping them problem solve, and strategizing.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;We are all in the 21st century working at high tech organizations – meetings are terrible since we have a wealth of different synchronous and asynchronous tools to get the same or better outcome from a meeting. Therefore I don’t like attending most meetings. Though sometimes you just have to get multiple people into a physical or virtual room and talk things through. Gaining the skills to be a meeting facilitator is very beneficial. It’s basically the practice of having an agenda, leading a meeting, keeping people on track, coming to conclusions on the talking points, and lastly creating action items. Without a meeting facilitator, it can be easy for a meeting to become taken over by one speaker or topic, leaving all other items to talk about untouched. Action items can also fall by the wayside, by either not being discussed, or people not being held accountable, which can absolutely demotivate people on the effectiveness of that meeting, especially if it’s recurring.&lt;/p&gt;
&lt;p&gt;Sometimes you might be missing one critical person in the room. It’s always painful to know that &lt;em&gt;We’re not going to get to a decisive answer on what we should do since we’re missing Jimmy&lt;/em&gt;. Getting good at honing in on this skill helps make your meetings productive, either by cancelling them to save everyone’s time, or consulting with the missing people beforehand. Giving this intuition as feedback to other people who host meetings can only help reduce this from happening in the future. No one likes wasting time.&lt;/p&gt;
&lt;p&gt;It’s one thing to have the meeting and come out feeling &lt;em&gt;Great! Everyone knows what needs to be done. Time to sit back and watch my genius planning unfold&lt;/em&gt;. Wrong. That’s half of the battle. You still have to course correct from time to time. This could mean following up on the people assigned action items to see if they need help or are blocked, freeing up devs from tasks that are of lesser of priority, and making sure the right people are being notified when action items are completed.&lt;/p&gt;
&lt;p&gt;But when the stars do align and the team gets shit done, don’t stay entirely humble. Remember to give yourself some credit for accelerating the team.&lt;/p&gt;</content:encoded></item><item><title>Key-Value Pairs in GraphQL</title><link>https://jonsimpson.ca/key-value-pairs-in-graphql/</link><guid isPermaLink="true">https://jonsimpson.ca/key-value-pairs-in-graphql/</guid><description>Key-Value Pairs in GraphQL</description><pubDate>Thu, 20 Jun 2019 03:27:46 GMT</pubDate><content:encoded>&lt;p&gt;Today I was pair programming with a member of my team on a new GraphQL mutation. We were trying to figure out how to represent the returning of data which included a list of key-value pairs – aka a Map datatype. These pairs weren’t constant since they were being returned from a third-party API, so hardcoding the key names in a type wouldn’t work.&lt;/p&gt;
&lt;p&gt;We toyed around with the idea of using an array where the first value would represent the key, and the second value would represent the value. We also wondered if the key-value would best be represented as its own type – that way the array method would never be misconstrued.&lt;/p&gt;
&lt;p&gt;We ended up delaying our decision to choose one method over another by mocking out what the resulting mutation response would look like to the caller. For example, here’s what the response would look like for using arrays to represent the key-value pairs:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;json&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;{&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;  &quot;data&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: {&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;    &quot;fields&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;      [&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;key1&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;value1&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;],&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;      [&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;key2&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;value2&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;],&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;      [&lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;key3&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;value3&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;],&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;    ]&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;  }&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And here’s what the response would look like if a GraphQL type was used for holding key-value pairs:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;json&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;{&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;  &quot;data&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: {&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;    &quot;fields&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;      {&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;&quot;key&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;key1&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;&quot;value&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;value1&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;},&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;      {&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;&quot;key&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;key2&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;&quot;value&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;value2&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;},&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;      {&lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;&quot;key&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;key3&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color:#79B8FF&quot;&gt;&quot;value&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color:#9ECBFF&quot;&gt;&quot;value3&quot;&lt;/span&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;    ]&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;  }&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color:#E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We quickly realized that the array-based method has the disadvantage of the client needing to implicitly know which place in the array the key and value reside. There’s also possibility of more or less than two elements in the array, even though the user would expect there to be only two. GraphQL and its schema provides a concise and explicit contract, and using this array method bypasses this benefit.&lt;/p&gt;
&lt;p&gt;Therefore, we went forth with adding a generic PairType to our GraphQL app. This worked perfectly for our use case.&lt;/p&gt;
&lt;p&gt;But now this begs the question: why doesn’t the GraphQL spec support key-value pairs as a first-class type?&lt;/p&gt;
&lt;p&gt;It appears that it’s a &lt;a href=&quot;https://github.com/graphql/graphql-spec/issues/101&quot;&gt;long standing feature request&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title>Staging environments slow developers down</title><link>https://jonsimpson.ca/staging-environments-slow-developers-down/</link><guid isPermaLink="true">https://jonsimpson.ca/staging-environments-slow-developers-down/</guid><description>Staging environments slow developers down</description><pubDate>Sun, 31 Mar 2019 19:06:02 GMT</pubDate><content:encoded>&lt;p&gt;For businesses to outperform their competitors and bring ideas to the market fast, Software Development has evolved towards a continuous delivery model of shipping small, incremental improvements to software. This method works incredibly well for Software-as-a-Service (SaaS) companies, which can deliver features to their customers as soon as features are fit to release.&lt;/p&gt;
&lt;p&gt;The practice of Continuous Delivery require the master branch to be in a readily shippable state. Thus decreasing the time to ship a change to production encourages faster iteration and smaller, less riskier, changes to be made. Additionally, Continuous Deployment, the shipping of the master branch as soon as changes make it to master, is achievable through a comprehensive suite of automated tests.&lt;/p&gt;
&lt;p&gt;For a development team, keeping this cycle on the order of minutes to tens of minutes is paramount. Slowing down means a slower iteration cycle, therefore resulting in larger and riskier changes being made.&lt;/p&gt;
&lt;p&gt;I have noticed my team slowing down by using our handful of staging servers more often than is necessary.&lt;/p&gt;
&lt;p&gt;Thankfully we can get back to better than we left off and learn a few things along the way!&lt;/p&gt;
&lt;h2 id=&quot;why-we-have-staging-serversenvironments&quot;&gt;Why we have staging servers/environments&lt;/h2&gt;
&lt;p&gt;My team builds the platform for Shopify’s Help Centre and the Merchant facing experience for contacting Support. This same app is also contributed to by our 20 Technical Writers on the Documentation team.&lt;/p&gt;
&lt;p&gt;Technical Writers work alongside the many product teams at Shopify to create and update documentation based on what the product team is building. Part of the process of continuously delivering this documentation is a member of the product team reviewing the changed pages for accuracy.&lt;/p&gt;
&lt;p&gt;This is often achieved through a Technical Writer publishing content to one of a handful of staging servers, then directing the product teams to visit the staging server.&lt;/p&gt;
&lt;p&gt;This workflow makes sense for the most part, since non-technical people can simply visit the staging server to view the unpublished changes. This workflow of having many staging servers isn’t a scalable solution, but that’s for another post.&lt;/p&gt;
&lt;p&gt;An effect of having all of these available staging servers is that developers use them to perform various tasks such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Sharing their work for other developers to look at&lt;/li&gt;
&lt;li&gt;Testing out risky changes in a production-like environment&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It can be pretty easy to rationalize slowing down as being more careful, but this is just a fallacy.&lt;/p&gt;
&lt;p&gt;This is an alternative outlook on shipping software since things can go wrong. But when developers are given the freedom to move fast, and are not held down by strict process, most of the time the best risk-reward balance is made. When things do go wrong, having a safety net of tests and production tooling to make it easy to figure out what went wrong, along with the ability to revert back to a previous state. The impact is therefore minimal.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/03/hanson-lu-1418313-unsplash.jpg&quot; alt=&quot;Photo by Hanson Lu on Unsplash&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Photo by &lt;a href=&quot;https://unsplash.com/photos/uoIabPWluyA?utm_source=unsplash&amp;#x26;utm_medium=referral&amp;#x26;utm_content=creditCopyText&quot;&gt;Hanson Lu&lt;/a&gt; on &lt;a href=&quot;https://unsplash.com/search/photos/stage?utm_source=unsplash&amp;#x26;utm_medium=referral&amp;#x26;utm_content=creditCopyText&quot;&gt;Unsplash&lt;/a&gt;&lt;/figcaption&gt;## The Repercussions&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Over the past few months I have observed a number of situations where developers have used staging environments instead of better alternatives.&lt;/p&gt;
&lt;p&gt;One of the biggest slowdowns in iteration cycle is the time to get your code reviewed by someone else. It’s an incredibly important step, but there are shortcuts that can be taken. One of those shortcuts being reviewing code on a staging server.&lt;/p&gt;
&lt;p&gt;It takes way longer to deploy code to a staging server than it does to locally checkout someone’s branch and run the code locally. Getting into the habit of pulling down someones changes, reviewing the code, and performing some exploratory testing with a running instance of the app enables a deeper inspection and understanding of the code.&lt;/p&gt;
&lt;p&gt;Additionally, using staging servers to test out code &lt;em&gt;“because it doesn’t work on my machine”&lt;/em&gt; is an anti-pattern. Developers must prioritize having all features working locally for everyone, at any time, by default. A dysfunctional local development environment just feeds the vicious cycle of more and more things should be tested on staging. Putting the time in to make everything testable in the local development environment pays dividends in speed and developer happiness.&lt;/p&gt;
&lt;h3 id=&quot;how-slow&quot;&gt;How slow?&lt;/h3&gt;
&lt;p&gt;Shipping large, risky changes by vetting that they work on staging first give developers the shortcut to iterate at a slower pace. Here’s a concrete example showing how much extra time it takes to test out code on staging.&lt;/p&gt;
&lt;p&gt;Dev B is reviewing Dev A’s code. Dev B looks over the changeset, and then asks Dev A to put their code up on staging so that they can verify that the code works as expected. Dev A pushes their code to a staging branch, waits for CI to pass, waits for the deploy to succeed, then notifies Dev B that they can test out the changes. Dev B then gets around to going through the steps to verify that the new changes behave as expected. Dev B then finally gives their sign-off on the changeset, or requests further changes. This entire process, mostly spent waiting for builds and CI, can take 30 minutes or more.&lt;/p&gt;
&lt;p&gt;Now lets see what a modified version of the process looks like if Dev B reviews Dev A’s code on their local machine. Dev B looks over Dev A’s changeset, then pulls down the code to their local machine for further inspection. Dev B starts up the app locally and goes through the steps to verify that the new changes behave as expected. Dev B optionally has the ability to poke around the changed code to gain a better understanding of how it fits in with the existing code. Dev B signs-off on the changeset, or requests further changes from Dev A. This process can take 5 minutes or more, but is magnitudes faster than using a staging environment.&lt;/p&gt;
&lt;p&gt;As we can see, the time taken to verify that Dev A’s code works correctly in staging takes at least six times longer on average due to having to wait for code to build, deploys to occur, and even unneeded conversations to coordinate using the staging environment. The same outcome can be performed much faster by replacing many of the steps with faster equivalents. For example, running CI and performing a deploy isn’t needed when running code locally. There’s also no time spent coordinating with Dev A to put their code up on the staging environment.&lt;/p&gt;
&lt;p&gt;There may be perceived speed with using the staging environment to review someone’s changes, but this is only a fallacy. Dev B may think: &lt;em&gt;“If I just need to visit the staging environment to review Dev A’s code, then I save myself time from having to stash my local changes, pull down the code, and start the app.”&lt;/em&gt; Correct, this saves Dev B’s time, but overall causes Dev A to take more of a hit to their time. Dev A has to push their code up to the staging env, causing CI to run, a deploy to occur, then notify Dev B to take a look tens of minutes later.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/03/ruslan-keba-1429285-unsplash.jpg&quot; alt=&quot;Photo by Ruslan Keba on Unsplash&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Photo by &lt;a href=&quot;https://unsplash.com/photos/_WHjtsrYYXQ?utm_source=unsplash&amp;#x26;utm_medium=referral&amp;#x26;utm_content=creditCopyText&quot;&gt;Ruslan Keba&lt;/a&gt; on &lt;a href=&quot;https://unsplash.com/search/photos/stage?utm_source=unsplash&amp;#x26;utm_medium=referral&amp;#x26;utm_content=creditCopyText&quot;&gt;Unsplash&lt;/a&gt;&lt;/figcaption&gt;## Where staging environments make sense&lt;p&gt;&lt;/p&gt;
&lt;p&gt;With all hardfast rules there are some exceptions. One of those exceptions is to validate new configuration for production systems. For example, since it’s not simple to run a local Kubernetes cluster, it’s safer to verify risky changes to &lt;a href=&quot;https://kubernetes.io/docs/concepts/workloads/controllers/deployment/&quot;&gt;Kubernetes Deployment config files&lt;/a&gt; by using a production like environment: staging.&lt;/p&gt;
&lt;p&gt;Another example is where lives or the wellbeing of people can be on the line. An example of this would be developing a payment processing service where breaking things could result in financial consequences for users of the system. Even a voting system would be an example of a critical system where it’s necessary to take the time to make sure everything is working correctly.&lt;/p&gt;
&lt;h2 id=&quot;antipatterns&quot;&gt;Antipatterns&lt;/h2&gt;
&lt;p&gt;Chatting with another developer about this blog post, I asked for some examples as to what kinds of things they use their staging environment for.&lt;/p&gt;
&lt;p&gt;One example was verifying that updating UI component libraries looked the same between development and production. Since there’s no real good way to test that the UI doesn’t look broken, it’s quite a manual process to verify the many screens and states look fine. One gotcha that was mentioned was that the production build of the Javascript and CSS assets can be different from the development build. This of course means that there is a difference between development and production, which means that bugs can slip through and get to their users.&lt;/p&gt;
&lt;p&gt;Off the top of my head a few suggestions came to mind. One idea was to make development more like the production environment (however that may be). During the testing process create a production build of the Javascript and CSS assets locally and use that to verify that the UI looks fine. Lastly, if possible make smaller changes that are easier to review and reason about.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/03/romain-hus-1239943-unsplash.jpg&quot; alt=&quot;Photo by Romain Hus on Unsplash&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Photo by &lt;a href=&quot;https://unsplash.com/photos/p6RnvFCqiJY?utm_source=unsplash&amp;#x26;utm_medium=referral&amp;#x26;utm_content=creditCopyText&quot;&gt;Romain Hus&lt;/a&gt; on &lt;a href=&quot;https://unsplash.com/?utm_source=unsplash&amp;#x26;utm_medium=referral&amp;#x26;utm_content=creditCopyText&quot;&gt;Unsplash&lt;/a&gt;&lt;/figcaption&gt;## Dark launching new functionality&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Shipping to production can have a certain amount of risk. A code change could crash the app, break a feature, or even cause a worse user experience. What if we could ship to production and drastically reduce these risks?&lt;/p&gt;
&lt;p&gt;Let’s talk about dark launching new features and changes. Dark launching is the practice of shipping new code to production, but hiding it from most users to prevent accidentally breaking things or negatively affecting the user’s experience. This could be implemented a number of different ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using the new logic if a special parameter is added to the page’s URL&lt;/li&gt;
&lt;li&gt;A special cookie set in the user’s browser to enable the new logic&lt;/li&gt;
&lt;li&gt;A/B testing of the current and new logic&lt;/li&gt;
&lt;li&gt;Enabling the new logic only for employees&lt;/li&gt;
&lt;li&gt;A beta flag that can turn on and off the logic at runtime&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example, my team is building out a new search backend. The team is able to ship small and incremental changes for this project without having to worry about breaking any of the existing search functionality. For the existing frontend code to integrate into the new backend code, the team is using URL parameters to dark launch this new search backend in production. This gives us great confidence of the new search backend will work since it’s being continually tested in production. Additionally, we’ll be using an A/B test to verify that the new search backend is better than the existing search backend according to our success metrics.&lt;/p&gt;
&lt;p&gt;Dark launching new functionality is another pattern that removes the need for staging environments. It does take some thought to figure out the best way to toggle on or off the new functionality, but when used well dark launching can minimize the impact of new code breaking production.&lt;/p&gt;
&lt;h2 id=&quot;immediate-improvements&quot;&gt;Immediate improvements&lt;/h2&gt;
&lt;p&gt;Later that day after convincing my team that staging servers were holding us back, one of our developers wasn’t able to test out our ticket submission form locally since it depended on another service to be running. Our app was missing the proper local development credentials to connect to this other service.&lt;/p&gt;
&lt;p&gt;A few Slack messages later with the team resulted in a combined effort to fix the local development environment. One change to the local development environment made developing locally as simple if not simpler than using the staging environment.&lt;/p&gt;
&lt;p&gt;Two months later the team is able to hold themselves to not using any of the staging environments. There have been a few times where the idea of making an exception has come up. I talked them off the ledge by suggesting to make less riskier changes by splitting things up into smaller pull requests, and even dark launching their feature.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/03/jodie-walton-502797-unsplash.jpg&quot; alt=&quot;Photo by Jodie Walton on Unsplash&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Photo by &lt;a href=&quot;https://unsplash.com/photos/thm-jF3JR3w?utm_source=unsplash&amp;#x26;utm_medium=referral&amp;#x26;utm_content=creditCopyText&quot;&gt;Jodie Walton&lt;/a&gt; on &lt;a href=&quot;https://unsplash.com/?utm_source=unsplash&amp;#x26;utm_medium=referral&amp;#x26;utm_content=creditCopyText&quot;&gt;Unsplash&lt;/a&gt;&lt;/figcaption&gt;## Recommendations&lt;p&gt;&lt;/p&gt;
&lt;p&gt;If I have convinced you on &lt;em&gt;staging servers being used too much for the wrong purposes&lt;/em&gt;, or are taking my more extreme view of &lt;em&gt;just don’t use staging servers&lt;/em&gt;, here is some practical advice to move towards these goals if you’re not there already.&lt;/p&gt;
&lt;p&gt;Start with thinking about yourself. From the features, projects, and bugfixes that have been shipped by yourself over the past few months, which have used a staging server to verify that they’ll work correctly in production? If there have been any, ask yourself what the reason was for having to use the staging server.&lt;/p&gt;
&lt;p&gt;Take those reasons and figure out if each one could have been prevented by one or a combination of the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If the local development environment was more like production I could have avoided using staging&lt;/li&gt;
&lt;li&gt;If the code change could have been dark launched to production I could have avoided using staging&lt;/li&gt;
&lt;li&gt;If we had more confidence with our tests catching regressions then I could have avoided using staging&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some of the improvements that can be made to limit the amount of times staging servers are used can seem like a lot of work. But think of this from a different perspective: how much time is wasted due to these inefficiencies being here?&lt;/p&gt;</content:encoded></item><item><title>Brodie: Building Shopify&apos;s new Help Centre</title><link>https://jonsimpson.ca/brodie-building-shopifys-new-help-centre/</link><guid isPermaLink="true">https://jonsimpson.ca/brodie-building-shopifys-new-help-centre/</guid><description>Brodie: Building Shopify&apos;s new Help Centre</description><pubDate>Mon, 14 Jan 2019 01:11:50 GMT</pubDate><content:encoded>&lt;p&gt;One of the primary projects which has defined the existence of my team at Shopify was a complete rebuild of the &lt;a href=&quot;https://help.shopify.com&quot;&gt;Help Centre’s&lt;/a&gt; platform. The prior Help Centre utilized &lt;a href=&quot;https://jekyllrb.com/&quot;&gt;Jekyll&lt;/a&gt; (the static site generator) with a number of features added over the past five years to provide documentation to our merchants, partners, and prospective customers.&lt;/p&gt;
&lt;p&gt;The rebuild took about six months, and successfully launched with multiple languages in July 2018.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/01/brodie-portrait.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Deacon Brodie&lt;/figcaption&gt;This post will first discuss the limitations we encountered with using Jekyll for a number of years on a Help Centre which has grown to 15 technical writers and 1600 pages. Next, a number of upcoming features are outlined which the new platform should easily accommodate for. Following that, a high level overview of Brodie, the library we built to replace Jekyll. Next, Brodie’s internals are explained with details on how it integrates with Ruby on Rails. This post then ends with links to related code discussed throughout this post.&lt;p&gt;&lt;/p&gt;
&lt;h2 id=&quot;jekylls-limitations&quot;&gt;Jekyll’s Limitations&lt;/h2&gt;
&lt;p&gt;As of February 2018, Shopify’s Help Centre consisted of 1600 pages, 3000 images, and 300 partials/includes. This amount of content can really slow down Jekyll’s build time. A clean build takes 80 seconds, while changing a single character on a page requires 15 seconds for a partial rebuild. This really slows down the workflow for our technical writers, as well as developers who maintain the heavy Javascript-based Support page.&lt;/p&gt;
&lt;p&gt;Static sites, where a server serves up html files, can only get you so far. Features considered &lt;em&gt;dynamic&lt;/em&gt; must be implemented using client-side Javascript. This has proven to be difficult and even restrictive to the features that could be added to the site, especially when it comes to features which require running on a server and not in the user’s browser. Things such as authenticating Shopify merchants before they contact Support is more difficult considering that all of the functionality lives in Javascript, or another app is relied upon.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/01/deacon-brodie-tavern.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;The original Deacon Brodie’s Tavern in Edinburgh&lt;/figcaption&gt;Even other companies have blogged about the hoops they’ve jumped through to &lt;a href=&quot;https://www.smashingmagazine.com/2016/08/using-a-static-site-generator-at-scale-lessons-learned/&quot;&gt;scale Jekyll&lt;/a&gt; too.&lt;p&gt;&lt;/p&gt;
&lt;h2 id=&quot;upcoming-features&quot;&gt;Upcoming Features&lt;/h2&gt;
&lt;p&gt;Allowing users to login to the Help Centre with their Shopify credentials can provide a more personalized experience. Based on the shops the Merchant has access to, the pages in the Help Centre can be tailored to their Country, the features that they use, and the growth stage of their business. The API documentation can be extended to provide the logged in user the ability to query their shop’s API.&lt;/p&gt;
&lt;p&gt;Enabling the ability for merchants to login to the Help Centre can simplify the process of talking with Support. Once logged in, users would be able to bypass verifying who they are to a Support representative, since they’ve already proven who they are by logging into the Help Centre. This saves time on both ends of the conversation and keeps the user focused on their problem.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/01/geograph-1339897-by-kim-traynor-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;A short history of Deacon Brodie’s life&lt;/figcaption&gt;Features could also be added to enhance the workflow of our technical writers. As a logged in technical writer a few features could be enabled such as showing all pages regardless of being hidden or being an early-release feature, a link to view the page on GitHub, or even a link to view the current page in Google Analytics. Improvements such as these make it much quicker to access to relevant data.&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Being able to correlate the Help Centre pages visited by a user before they contact Support can help infer how successful pages are at helping answer the user’s question. Pages which do poorly can be updated, and pages which are successful can be studied for trends. Resources can be better focused on areas of the Help Centre pages which need it. Additionally, combining the specific pages visited to Support interactions opens the opportunity to perform A/B tests. A Help Centre page can have two or more versions, and the version which results in the least amount of Support interactions could be considered the winning version. Currently there is no way to definitively correlate the two.&lt;/p&gt;
&lt;p&gt;Many Support organizations gauge the effectiveness of their Help Centre content (self-help) by comparing potential Support interactions solved by Help Centre pages to the number of actual Support interactions. A so called &lt;em&gt;deflection ratio&lt;/em&gt;, where the higher the self-help-to-support-interaction ratio the better. This ratio can be more accurately calculated by better tracking of the user’s journey through these various Shopify properties before they contact Support.&lt;/p&gt;
&lt;p&gt;Lastly, &lt;a href=&quot;https://en.wikipedia.org/wiki/Internationalization_and_localization&quot;&gt;Internationalization (aka I18n) and Localization&lt;/a&gt; means translating pages into different languages and cultural norms. I18n would enable the Help Centre to be used by people other than those who know English, or prefer reading in a language they understand better. I18n support can be hacked into Jekyll, but as was discussed earlier with 1600 pages already slowing down the build times, Jekyll will absolutely cripple when there exists multiple localized versions of each page. Therefore, having an app that can scale to a much larger number of pages is required for I18n and localization to even be considered.&lt;/p&gt;
&lt;h2 id=&quot;the-solution&quot;&gt;The Solution&lt;/h2&gt;
&lt;p&gt;To enable our Help Centre to scale way past 1600 pages, and support complex server-side features, a scrappy team was formed to rebuild the Help Centre platform in Ruby on Rails.&lt;/p&gt;
&lt;p&gt;Rewriting any of the content pages or partials wouldn’t be feasible for the time or resources we had – therefore maintaining compatibility with the existing content files was paramount.&lt;/p&gt;
&lt;p&gt;Allowing the number of pages in the Help Centre to keep growing, but to dramatically reduce the 80 second clean build time, and the 15 second page rebuild time requires an architectural shift. Moving away from Jekyll’s model of pre-rendering all pages at build time to the model of rendering only what’s needed at request time. Instead of performing all computational work up-front, performing smaller batches of work at request time spreads out the cost.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/01/deacon-brodies-ottawa.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;The Deacon Brodies Pub in Ottawa, steps away from Shopify HQ&lt;/figcaption&gt;Ruby on Rails was chosen as the new technology stack for the Help Centre for a few reasons. The limits were being reached with Jekyll, therefore we technically couldn’t continue using it. Shopify’s internal tooling and production systems heavily integrate with Rails applications, therefore building on Rails to utilize these would save a lot of developer time. Shopify also employs the largest base of Rails developers, so tapping into that workforce and knowledge base is very beneficial for future development.&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Ruby on Rails brings a number of complementary features such as a solid MVC framework, simple caching abstractions for application code and views, as well as a strong and healthy community of libraries and users. These benefits make Rails a great selling point for building new features faster and easier than the prior Jekyll system.&lt;/p&gt;
&lt;p&gt;One of the things that has been working quite well over the past few years has been the workflow for our technical writers. It consists of using a text editor (such as Atom) to edit Markdown and Liquid code, then using Git and GitHub to open a Pull Request for peer review of the changes. Automated tests check for broken links, missing images, incorrectly formed HTML and Javascript. Once the changes are approved and all tests have passed, the Pull Request is merged and shipped to production.&lt;/p&gt;
&lt;p&gt;Since there isn’t a good reason to change the technical writer’s current workflow we’re more than happy to design the new documentation site with the existing workflow in mind.&lt;/p&gt;
&lt;p&gt;One of the main features of the platform my team built was the flexible content rendering engine. It’s equivalent to Jekyll on steroids. Here I’ll discuss the heart of the system, Brodie, the ERB-Liquid-Markdown rendering engine.&lt;/p&gt;
&lt;h2 id=&quot;brodie&quot;&gt;Brodie&lt;/h2&gt;
&lt;p&gt;Brodie is the library we’ve purpose-built for Shopify’s new Help Centre. It renders any file that contains ERB, Liquid, and Markdown, or a combination of the three into HTML.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/01/deacon-brodie-portrait.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Brodie is named after &lt;a href=&quot;https://www.deaconbrodiespub.com/&quot;&gt;Deacon Brodies&lt;/a&gt;, an Ottawa pub which is itself named after Deacon &lt;a href=&quot;https://en.wikipedia.org/wiki/William_Brodie&quot;&gt;William Brodie&lt;/a&gt;, an 18th-century city councillor in Edinburgh who moonlighted as a burglar and gambler.&lt;/p&gt;
&lt;p&gt;Deacon Brodie’s double life inspired the Robert Louis Stevenson story &lt;a href=&quot;https://en.wikipedia.org/wiki/Strange_Case_of_Dr_Jekyll_and_Mr_Hyde&quot;&gt;Strange Case of Dr Jekyll and Mr Hyde&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Brodie, and the custom extensions built on-top of it, enable a smooth transition from Jekyll to Rails. Shopify’s 1600 pages, 3000 images, and 300 partials/includes can be rendered by Brodie without modification. Additionally, the workflow of the technical writers is not disturbed. They continue to use their favourite text editor to modify content files, Git and GitHub to perform reviews, and to utilize the existing Continuous Delivery pipeline for fast validation and shipping.&lt;/p&gt;
&lt;p&gt;Views in Rails are rendered using &lt;em&gt;templates&lt;/em&gt;. A template is a file that consists of code that defines what the user will see. In a Rails app the template file will usually consist of ERB mixed into HTML. A template file like this would belong in the &lt;code&gt;app/views/&lt;/code&gt; directory and would have a descriptive name such as &lt;code&gt;homepage.html.erb&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The magic in Rails treats templates differently based on its filename. Let’s break it down. &lt;code&gt;homepage&lt;/code&gt; represents the template’s filename. Rails knows to look for this template based on this name. The &lt;code&gt;html&lt;/code&gt; part represents what the format the template should output to. Lastly, &lt;code&gt;erb&lt;/code&gt; is the portion which specifies what language the template file is written in. This naming convention enables Rails to dynamically render views just by looking at the filename.&lt;/p&gt;
&lt;p&gt;Rails provides template handlers to render ERB to HTML, as well as JSON and a few others. Rails offers the ability to extend its rendering system by plugging in new template handlers. This is where Brodie integrates with Rails applications. Brodie provides its own template handler to take content files and convert the ERB, Liquid, and Markdown to HTML.&lt;/p&gt;
&lt;p&gt;Rails exposes this via the &lt;code&gt;ActionView::Template.register_template_handler(:md, Content)&lt;/code&gt; where &lt;code&gt;:md&lt;/code&gt; is the file extension to act on, and &lt;code&gt;Content&lt;/code&gt; is the Class to use as the template rendering engine (template handler). Next we’ll go over how a template handler works.&lt;/p&gt;
&lt;h2 id=&quot;rendering-templates&quot;&gt;Rendering Templates&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2019/01/brodie-first-interview.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;The only interface a template handler is required to respond to is &lt;code&gt;call&lt;/code&gt; with one parameter being the &lt;code&gt;template&lt;/code&gt; to render. This method should return a string of code that will render the view. This string will be &lt;code&gt;eval&lt;/code&gt;‘ed by the template later. Returning a string of code is a Rails optimization which inlines much of the code required to render the template. This reduces the number of methods needing to be called, speeding up the already time consuming rendering process.&lt;/p&gt;
&lt;p&gt;When Rails needs to render a view it takes the specified template and &lt;a href=&quot;https://github.com/rails/rails/blob/e5926a3c44a2666bd09bdae21a273e899074d8f1/actionview/lib/action_view/template.rb#L283&quot;&gt;calls the proper template handler on itself&lt;/a&gt;. The handler returns a string that contains the code that renders the template. The Template class &lt;a href=&quot;https://github.com/rails/rails/blob/e5926a3c44a2666bd09bdae21a273e899074d8f1/actionview/lib/action_view/template.rb#L287-L293&quot;&gt;combines&lt;/a&gt; the code with other code, then &lt;a href=&quot;https://github.com/rails/rails/blob/e5926a3c44a2666bd09bdae21a273e899074d8f1/actionview/lib/action_view/template.rb#L309&quot;&gt;evals&lt;/a&gt; the stringified code.&lt;/p&gt;
&lt;p&gt;For example, the ERB-Liquid-Markdown renderer has a &lt;code&gt;call&lt;/code&gt; method like the following:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;plaintext&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span&gt;def call(template)&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span&gt;  compiled_source = erb_handler.call(template)&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span&gt;  &quot;Brodie::Handlers::Content.render(begin;#{compiled_source};end, local_assigns)&quot;&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span&gt; end&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Brodie first renders the ERB present in the template’s content with the existing ERB handler that comes with Rails. Brodie then returns a string of code which calls the “render” method on itself. That render method is shown next:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;plaintext&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span&gt;def render(source, local_assigns = {})&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span&gt;  markdown.call(&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span&gt;    liquid.call(source, local_assigns)&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span&gt;  ).html_safe&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span&gt;end&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Here is where the actual rendering of the Liquid and Markdown occur. When this code is &lt;code&gt;eval&lt;/code&gt;‘ed the parameter &lt;code&gt;local_assigns&lt;/code&gt; is included for passing in variables when rendering a view. This is how variables are magically passed from Rails controllers into views.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/static/images/2019/01/jekyll-to-rails-comparison.png&quot;&gt;&lt;img src=&quot;/static/images/2019/01/jekyll-to-rails-comparison.png&quot; alt=&quot;&quot;&gt;&lt;/a&gt;&lt;/p&gt;&lt;figcaption&gt;Left: The old Jekyll site. Right: The new Rails site. The Help Centre rebuild looks the same but has a completely new backend&lt;/figcaption&gt;It’s as straightforward as that for rendering ERB, Liquid, and Markdown together. The early days of Brodie were spent understanding the ins-and-outs of ActiveView enough to validate that this approach was a sane practice and not breaking in edge cases.&lt;p&gt;&lt;/p&gt;
&lt;h2 id=&quot;further-reading&quot;&gt;Further Reading&lt;/h2&gt;
&lt;p&gt;The current documentation is really limited when it comes to &lt;a href=&quot;http://api.rubyonrails.org/classes/ActionView/Template.html&quot;&gt;Templates&lt;/a&gt; and &lt;a href=&quot;http://api.rubyonrails.org/classes/ActionView/Template/Handlers.html&quot;&gt;Template Handlers&lt;/a&gt;. I would suggest building a small template handler, setting breakpoints and walk through the source. &lt;a href=&quot;https://gist.github.com/davidjrice/3014948&quot;&gt;Here’s a great example of a template handler for Markdown&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Additionally, looking over the source code and comments is the best way to get an understanding of the ActiveView internals. The main entry point into ActiveView is the &lt;code&gt;render&lt;/code&gt; method from &lt;a href=&quot;https://github.com/rails/rails/blob/master/actionview/lib/action_view/renderer/template_renderer.rb&quot;&gt;TemplateRenderer.&lt;/a&gt; &lt;a href=&quot;https://github.com/rails/rails/blob/master/actionview/lib/action_view/template.rb&quot;&gt;Template&lt;/a&gt; would be best to check out next as it concerns itself with actually rendering templates. Lastly, &lt;a href=&quot;https://github.com/rails/rails/blob/master/actionview/lib/action_view/template/handlers.rb&quot;&gt;Handlers&lt;/a&gt; would be good to check out to see how Rails can register and fetch Template Handlers.&lt;/p&gt;</content:encoded></item><item><title>Keep Continuously Testing</title><link>https://jonsimpson.ca/keep-continuously-testing/</link><guid isPermaLink="true">https://jonsimpson.ca/keep-continuously-testing/</guid><description>Keep Continuously Testing</description><pubDate>Sun, 30 Dec 2018 22:48:18 GMT</pubDate><content:encoded>&lt;p&gt;One of the more powerful tools on my toolbelt is a red-green development cycle. With this cycle of purposefully writing tests which fail (red), then writing the code required to make them pass (resulting in green tests), allows me to keep my mind focused on the current task at hand, and saves me time.&lt;/p&gt;
&lt;p&gt;For example, take the annual &lt;a href=&quot;https://adventofcode.com/&quot;&gt;Advent of Code&lt;/a&gt; set of challenges: given a problem and example input with output, create working code which solves the problem for any given input.&lt;/p&gt;
&lt;p&gt;Small challenges such as Advent of Code provide the perfect scenario for &lt;a href=&quot;https://en.wikipedia.org/wiki/Test-driven_development&quot;&gt;Test Driven Development&lt;/a&gt; (TDD). Given an Advent of Code challenge, I would always write a test which asserts some output. Since those tests would fail, I would then proceed to write code which returns the correct answer.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/12/30-15-vs7t1-mc1tv.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;My optimal TDD coding setup. Clockwise, from left: source file, test file, automatic test output, interactive Ruby console.&lt;/figcaption&gt;&lt;strong&gt;I can’t emphasize the time saved from automatically running my tests whenever any file is saved in my working directory.&lt;/strong&gt; The time savings has payed off time and time again because it has saved me a number of keystrokes off of running tests manually each time. Saving my source or test file automatically causes my tests to start running in another terminal pane.&lt;p&gt;&lt;/p&gt;
&lt;p&gt;The one way I’m able to accomplish this is by utilizing two tools: &lt;code&gt;ag&lt;/code&gt; and &lt;code&gt;entr&lt;/code&gt;. These two shell tools used in combination allow for a powerful red-green development process.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ag&lt;/code&gt; (commonly known as the Silver Searcher, succeeded by &lt;code&gt;rg&lt;/code&gt; (RipGrep)) is a program which lists files recursively. &lt;code&gt;entr&lt;/code&gt; is a program which runs a given shell command whenever a file is modified. These two tools piped together like &lt;code&gt;ag -l | entr -c rails test&lt;/code&gt; enable a TDD workflow because &lt;code&gt;ag&lt;/code&gt; provides a recursive list of files, then &lt;code&gt;entr&lt;/code&gt; watches those files and runs the provided command. In this case, the command to run every time is &lt;code&gt;rails test&lt;/code&gt;. The &lt;code&gt;-c&lt;/code&gt; is a parameter for &lt;code&gt;entr&lt;/code&gt; to clear the console each time the command is rerun.&lt;/p&gt;
&lt;h3 id=&quot;download-ag-and-entr&quot;&gt;Download &lt;code&gt;ag&lt;/code&gt; and &lt;code&gt;entr&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;I would highly recommend trying out these two tools in combination next time you encounter your next TDD situation – it’s certainly saved me a lot of time repeating keyboard commands. Even if you don’t do TDD regularly or at all, I would recommend trying out these new tools.&lt;/p&gt;
&lt;p&gt;Download &lt;code&gt;ag&lt;/code&gt; from &lt;a href=&quot;https://github.com/ggreer/the_silver_searcher#installing&quot;&gt;Github&lt;/a&gt;, or equivalently, &lt;code&gt;rg&lt;/code&gt; (&lt;a href=&quot;https://github.com/BurntSushi/ripgrep#installation&quot;&gt;RipGrep&lt;/a&gt;) if you’re feeling up to the challenge. &lt;code&gt;entr&lt;/code&gt; can also be fetched from &lt;a href=&quot;https://github.com/clibs/entr#event-notify-test-runner&quot;&gt;Github&lt;/a&gt;.&lt;/p&gt;
&lt;h4 id=&quot;for-a-quicker-install&quot;&gt;For a quicker install&lt;/h4&gt;
&lt;p&gt;&lt;code&gt;brew install the_silver_searcher entr&lt;/code&gt; on Mac OS.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;apt-get install silversearcher-ag entr&lt;/code&gt; on Ubuntu-like systems.&lt;/p&gt;</content:encoded></item><item><title>Why “&amp;amp;” doesn&apos;t actually break your HTML URLs</title><link>https://jonsimpson.ca/why-amp-doesnt-actually-break-your-html-urls/</link><guid isPermaLink="true">https://jonsimpson.ca/why-amp-doesnt-actually-break-your-html-urls/</guid><description>Why “&amp;amp;” doesn&apos;t actually break your HTML URLs</description><pubDate>Sat, 10 Nov 2018 17:34:30 GMT</pubDate><content:encoded>&lt;p&gt;Writing tests for some code which generated HTML ended up surfacing one peculiarity with how HTML encodes URLs. The valid URL &lt;code&gt;https://example.com?a=b&amp;#x26;c=d&lt;/code&gt; would always get modified when inserted into HTML like so: &lt;code&gt;&amp;#x3C;a href=&quot;https://example.com?a=b&amp;#x3C;strong&gt;&amp;#x26;amp;&amp;#x3C;/strong&gt;c=d&quot;&gt;foo&amp;#x3C;/a&gt;&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;One of my teammates commented on this during a code review – why the &lt;code&gt;&amp;#x26;&lt;/code&gt; character is converted to &lt;code&gt;&amp;#x26;amp;&lt;/code&gt; in the resulting HTML. That URL didn’t look right since the &lt;code&gt;&amp;#x26;amp;&lt;/code&gt; would break the URL query string.&lt;/p&gt;
&lt;p&gt;Even more confusing was that the HTML in the URL still worked since Google Chrome and other browsers converted the URL in the HTML from its &lt;code&gt;&amp;#x26;amp;&lt;/code&gt; form back to &lt;code&gt;&amp;#x26;&lt;/code&gt;. Were the browsers just being helpful by handling these developer mistakes, much like it already does with closing missing HTML elements?&lt;/p&gt;
&lt;h3 id=&quot;the-fake-bug-hunt&quot;&gt;The fake bug hunt&lt;/h3&gt;
&lt;p&gt;Over two hair pulling days of reading &lt;a href=&quot;https://github.com/sparklemotion/nokogiri/issues/1127&quot;&gt;GitHub issues&lt;/a&gt;, &lt;a href=&quot;https://stackoverflow.com/questions/8580803/how-can-i-put-a-string-with-an-ampersand-in-an-xml-file-with-nokogiri&quot;&gt;StackOverflow&lt;/a&gt;, &lt;a href=&quot;https://html.spec.whatwg.org/multipage/syntax.html#syntax-ambiguous-ampersand&quot;&gt;HTML standards&lt;/a&gt;, &lt;a href=&quot;https://github.com/sparklemotion/nokogiri/blob/master/lib/nokogiri/xml/node.rb&quot;&gt;source code&lt;/a&gt;, and more, it was clear that there was a clear divide in understanding. One group of people who understood this as a bug in their library of choice and another group who understood that this wasn’t a bug.&lt;/p&gt;
&lt;p&gt;I was definitely in the former group of people until I finally found a &lt;a href=&quot;https://mrcoles.com/blog/how-use-amersands-html-encode/&quot;&gt;helpful blog post clearing up the confusion&lt;/a&gt;. Even &lt;a href=&quot;https://stackoverflow.com/questions/6322562/using-amp-in-url-bugs-up-the-get/6322572#6322572&quot;&gt;this StackOverflow answer&lt;/a&gt; concisely summed why this is, in a few quick sentences.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Simply stated, lone &lt;code&gt;&amp;#x26;&lt;/code&gt; characters in HTML are invalid and must be escaped to &lt;code&gt;&amp;#x26;amp;&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Since HTML is a descendant of XML, HTML implements a subset of XML’s rules. One of those rules is that a &lt;code&gt;&amp;#x26;&lt;/code&gt; character by itself is invalid since &lt;code&gt;&amp;#x26;&lt;/code&gt; is used as an escape character for a &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references&quot;&gt;character entity reference&lt;/a&gt; (eg. &lt;code&gt;&amp;#x26;amp;&lt;/code&gt;, &lt;code&gt;&amp;#x26;lt;&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;The confusion arises when people don’t know that this rule exists. Many, including myself, was blaming it on their HTML parsing libraries such as &lt;a href=&quot;https://github.com/sparklemotion/nokogiri&quot;&gt;Nokogiri&lt;/a&gt; and &lt;a href=&quot;http://xmlsoft.org/&quot;&gt;libxml2&lt;/a&gt;. Others blamed their web app of choice since it sends them invalid HTML or XML that their HTML parser doesn’t know how to deal with.&lt;/p&gt;
&lt;h3 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Another way of understanding the same problem is that a URL on its own has its own rules around which characters must be encoded. HTML also has different encoding rules. So when URLs are used in HTML, the URL may look invalid, but given that it is in HTML, HTML has its own rules around what characters need escaping. This can lead to funky looking URLs, but rest assured that using a HTML parsing library or a browser will properly encode and decode any sort of data stored within HTML.&lt;/p&gt;
&lt;p&gt;This explains why our browsers see &lt;code&gt;&amp;#x26;amp;&lt;/code&gt; in the raw HTML and know to convert it back to &lt;code&gt;&amp;#x26;&lt;/code&gt;. This also confirms that it is completely fine seeing &lt;code&gt;&amp;#x26;amp;&lt;/code&gt; characters in tests comparing HTML.&lt;/p&gt;</content:encoded></item><item><title>Twenty-four!</title><link>https://jonsimpson.ca/twenty-four/</link><guid isPermaLink="true">https://jonsimpson.ca/twenty-four/</guid><description>Twenty-four!</description><pubDate>Tue, 02 Oct 2018 01:46:07 GMT</pubDate><content:encoded>&lt;p&gt;It’s hard to come up with the content for this post while fending off a sickness, but I know it’s a yearly ceremony for myself to look back and reflect on the prior year. As always, what better of a time to do this than on my Birthday!&lt;/p&gt;
&lt;p&gt;One word can really describe my primary focus over the past year: Career. This time last year I was just about to pass the 90 day mark at Shopify.&lt;/p&gt;
&lt;p&gt;Whether it’s been building close and trustworthy friendships with teammates and other colleagues, levelling up new hires through mentoring, or continually delivering impactful work – this year has been nothing more than exemplary.&lt;/p&gt;
&lt;p&gt;Let’s get right into things! Since this time last year I moved to downtown Ottawa and am now living without roommates. Crazy to think that it only happened 10 months ago since it feels like forever, but I am enjoying all the perks of having no roommates and downtown life.&lt;/p&gt;
&lt;p&gt;This summer was one of my most active to date. There was always something going on during the week or weekend from July straight through August. I had the opportunity to travel with a friend to his home town of Fredericton, New Brunswick. For the first time being on the east-coast I was expecting an east-coast accent out of everyone, but the place seemed more like Ontario than not. It was a great time hanging out with his friends and attending a party at the local hotel.&lt;/p&gt;
&lt;p&gt;I had a great time with friends at two music festivals: Bluesfest and Escapade. Escapade was especially fun since there was a number of great acts: Alesso, Tchami, Zedd, and Kaskade. One private festival I went to was about 15 people camping at a buddies lakeside property in Quebec. A DJ booth was set up and the trance music was going on late into the night.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/10/escapade.gif&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Being so close to downtown has its perks – I was within walking distance to both festivals.&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Another bunch of cool moments were centred around exploring other Shopify offices. Barrel Hall in Waterloo was the coolest looking since it was once a distillery. It still has all of the characteristic aging barrels and wooden structure. Montreal’s office has the best artwork and looks like the most liveable city. Toronto’s offices and the city in general was a grand party since it’s mostly new to me, but I have friends, family, and colleagues everywhere.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/10/jon-barrel-hall.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Barrel Hall, Shopify’s Waterloo Office, definitely had the most character.&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Even though I didn’t do as much cycling as last year, this year I took advantage of Ottawa’s Sunday Bikedays. On Sundays throughout the summer certain parkways were closed for the morning. This allowed for coasting down some long stretches of smooth roadway. The midpoint for some of these outings were spent taking a break at a local brewpub. Some friends joined me every once in a while too!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/10/ottawa-river-sunset.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;One of the many destinations – where the Rideau Canal meets the Ottawa River.&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Investing and personal finance has become a hobby of mine and ever more important as I get older. It’s better earlier than later to learn about the do’s and don’ts of personal finance. A year of listening to related podcasts, plenty of reading, and managing my savings has enabled me to go from zero to pretty competent. I’m lucky to have a representative group of people to bounce ideas and plans off of.&lt;/p&gt;
&lt;p&gt;I went to my first conference, BSides Ottawa, which was quite fun. I met a number of colleagues and played in my first capture the flag event. I found out that I can defend, but am not too good at attacking. I’ll try again to attend this year!&lt;/p&gt;
&lt;p&gt;December was when myself and a few others started a rewrite for Shopify’s Help Centre. Unknown to us, there was quite a lot of feature creep – either from “that one little feature that’s existed forever”, to adding multiple language support. This resulted in the project taking seven months, but we’re glad to have done it. Throughout the process we started and built our own kick-ass team. When the rewrite shipped it went off without a hitch! 🎇 Now all of our current projects hinge off of the benefits that this rewrite brought.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/10/montreal-team.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Some of the team which traveled to Montreal to launch the Help Centre.&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;I attended a few training sessions that should benefit my career – Visualizing Software Architecture with the C4 Model, as well as Agile Scrum training. The latter has definitely transformed my team for the better.&lt;/p&gt;
&lt;p&gt;There was a lot of various work events – planned or unplanned, official or unofficial – which I’m pretty grateful to have experienced with friends and colleagues. Alas, there are too many to mention.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/10/marching_band.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;For example: that one time we had a marching band…&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Here’s to another year of learning, growth, exploration, and good times.&lt;/p&gt;</content:encoded></item><item><title>Hunting for segmentation faults in Ruby programs</title><link>https://jonsimpson.ca/hunting-for-segmentation-faults-in-ruby-programs/</link><guid isPermaLink="true">https://jonsimpson.ca/hunting-for-segmentation-faults-in-ruby-programs/</guid><description>Hunting for segmentation faults in Ruby programs</description><pubDate>Mon, 30 Jul 2018 13:01:35 GMT</pubDate><content:encoded>&lt;p&gt;I was working on building a content management engine for &lt;a href=&quot;https://help.shopify.com&quot;&gt;Shopify’s next generation Help Centre&lt;/a&gt;. Code named Brodie, it was equivalent to the &lt;a href=&quot;https://jekyllrb.com&quot;&gt;Jekyll&lt;/a&gt; static site generator added to Ruby on Rails, but instead of rendering all the pages up front at compile time, each page is generated when it is requested by the client.&lt;/p&gt;
&lt;p&gt;Brodie used a Ruby Gem called &lt;a href=&quot;https://github.com/vmg/redcarpet&quot;&gt;Redcarpet&lt;/a&gt; for the Markdown rendering. Redcarpet worked wonderfully, but Brodie ended up having a severe bug due to the extensive usage of it. The way Redcarpet was being used in Brodie resulted in periodic &lt;a href=&quot;https://en.wikipedia.org/wiki/Segmentation_fault&quot;&gt;segmentation faults&lt;/a&gt; (segfaults) while rendering Markdown. These segfaults were causing many 502 and 503 errors when some unknown pages were being visited. It was such an issue that all the web servers in the cluster would go down for some time until they restarted automatically.&lt;/p&gt;
&lt;h3 id=&quot;how-do-i-redcarpet&quot;&gt;How do I Redcarpet?&lt;/h3&gt;
&lt;p&gt;To better explain the issue and its resolution, it is best to have an understanding of how Redcarpet, and really any other text renderer works. Here is a simple example:&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/00a0c3a837574c1d121224234fd15c83.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;In the above example, the code defines the Markdown that is to be rendered to HTML, sets up the &lt;code&gt;Redcarpet::Markdown&lt;/code&gt; configuration object, and then finally parses and renders the Markdown to HTML.&lt;/p&gt;
&lt;p&gt;But wait! There’s more. Jekyll and Brodie both use the &lt;a href=&quot;https://shopify.github.io/liquid/&quot;&gt;Liquid language&lt;/a&gt; (made by Shopify!) to make it easier to write and manage content. Liquid provides control flow structures, variables, and functions. One useful function allows including the contents of other files into the current file (the equivalent of partials in Rails). Here is an example that uses the &lt;a href=&quot;https://jekyllrb.com/docs/includes/&quot;&gt;Liquid include function&lt;/a&gt;:&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/7282f29c140ef38e79fec05f9c537657.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;As we can see in the example above, the code renders the Liquid and Markdown to HTML. This is achieved by rendering the Liquid first, then passing the result of that into the Markdown renderer. Additionally, the Liquid include function injected the contents of &lt;code&gt;_included.liquid&lt;/code&gt; exactly where the &lt;code&gt;include&lt;/code&gt; function was called in &lt;code&gt;main.md&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Now that the basics of Markdown and Liquid rendering have been explained, it is now possible to understand this segfault issue.&lt;/p&gt;
&lt;h3 id=&quot;where-is-this-segfault-coming-from&quot;&gt;“Where is this segfault coming from???”&lt;/h3&gt;
&lt;p&gt;When my team and I were close to launching the new Help Centre that used Brodie, the custom-built Liquid and Markdown rendering engine, the app would crash due to segmentation faults. When the servers were put under load with many requests coming in, the segfaults and resulting downtime was magnified. It was clear from load testing that a small amount of traffic would bring down the entire site and keep it down.&lt;/p&gt;
&lt;p&gt;The segfaulting would lead to servers becoming unavailable until Kubernetes, the cluster manager, checked that those servers were unhealthy and restarted them. The time it took for the pod to come back online would be 30-60 seconds. With the system being under load, it was only a couple of minutes before all the servers in the cluster were down. When this happened, the app returned HTTP 502 and 503 errors to any client requesting a page – never a good sign.&lt;/p&gt;
&lt;p&gt;The only message that was present in the logs before the app died was the following:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Assertion failed: (md-&gt;work_bufs[BUFFER_BLOCK].size == 0), function sd_markdown_render, file markdown.c, line 2544.&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Apparently, Ruby crashed in a random Redcarpet C function call. No sort of stacktrace or helpful logging followed this message. The logs did not even include which page the client requested, as the usual Rails request logging was created after the HTTP request finished. This &lt;code&gt;Assertion failed&lt;/code&gt; message was a lead, but didn’t help much since it does not reference what caused it.&lt;/p&gt;
&lt;p&gt;I have dealt with other Redcarpet issues in the past, where methods that have been extended in Redcarpet to add custom behaviour have thrown exceptions. Sometimes these exceptions have caused the request to fail and a stacktrace of the issue to show up. Other times it has resulted in a segfault with a similar Redcarpet C function in the message. Ultimately, writing better code fixed this earlier situation.&lt;/p&gt;
&lt;p&gt;My intuition told me that an error was being thrown while rendering the page, causing this segfault to occur. I attempted an experiment where I added some rescue blocks to the Redcarpet methods that we extended. This would prevent the potential exceptions from being raised in the buggy code that was causing it, hopefully resulting in no segfaults. If that fix succeeded, I could safely assume that fixing the code which raised the error would be the end of the story.&lt;/p&gt;
&lt;p&gt;Trying this, the experiment was shipped to production. Things went well until the next day. Sometime overnight the page that caused the segfaults was hit, and the operational dashboards recorded the cluster going down and rebooting. At least this confirmed that the Redcarpet extensions were not at fault.&lt;/p&gt;
&lt;h3 id=&quot;getting-lucky&quot;&gt;Getting lucky&lt;/h3&gt;
&lt;p&gt;Playing around with things, a page was found out of sheer luck that could cause the app to segfault repeatedly. Visiting this page once did not cause the server to crash or the response to 500, but refreshing the page multiple times did cause the server to crash. Since this app was running multiple threads in the local development and production environments to answer requests in parallel, it is possible that there was a shared Redcarpet data structure that was getting &lt;a href=&quot;https://en.wikipedia.org/wiki/Clobbering&quot;&gt;clobbered&lt;/a&gt; by multiple threads writing to it at the same time. This is actually a recurring issue according to the community:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/vmg/redcarpet/issues/318&quot;&gt;https://github.com/vmg/redcarpet/issues/318&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/vmg/redcarpet/issues/570&quot;&gt;https://github.com/vmg/redcarpet/issues/570&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/vmg/redcarpet/issues/176&quot;&gt;https://github.com/vmg/redcarpet/issues/176&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://gitlab.com/gitlab-org/gitlab-ce/issues/36637&quot;&gt;https://gitlab.com/gitlab-org/gitlab-ce/issues/36637&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&quot;recursive-rendering&quot;&gt;Recursive rendering&lt;/h2&gt;
&lt;p&gt;Discussing the issue more with my larger team of developers, there was the idea of removing any sort of cross-thread sharing of Redcarpet’s configuration object. One of the other developers shipped a PR which gave each thread its own Redcarpet configuration object, but this did not end up fixing the problem.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/07/Depth-first-tree.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;A tree, showing the order in which nodes are traversed using the depth-first search algorithm. (&lt;a href=&quot;https://commons.wikimedia.org/wiki/File:Depth-first-tree.png&quot;&gt;CC 3.0&lt;/a&gt;)&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Building on top of this developer’s work, I knew that it was possible for the Redcarpet renderer to be called recursively based on the nature of the Liquid and Markdown content files. As described earlier, it is possible for one content file to &lt;code&gt;include&lt;/code&gt; another content file. As we saw in the examples earlier in this article, when a content file is being rendered, the rendering pauses on that content file and descends into the included file to render it, then returns to where it left off in the original content file. This behaviour is exactly like the &lt;a href=&quot;https://en.wikipedia.org/wiki/Depth-first_search&quot;&gt;depth-first search algorithm&lt;/a&gt; from graph theory.&lt;/p&gt;
&lt;p&gt;After making this breakthrough it was simple to understand what to try next. Each time Redcarpet was being called to render some Markdown, always create a new Redcarpet configuration object. This should solve the issue of multiple thread writes, as well as the recursive writes. Even though there is extra overhead with creating a new Redcarpet configuration object each time a content file is rendered, it is a reliable workaround that bypasses Redcarpet’s single-thread, single-writer limitation.&lt;/p&gt;
&lt;p&gt;After coding and shipping this fix, it worked!&lt;/p&gt;
&lt;p&gt;Refreshing that problematic page multiple times, no matter how many times, never crashed the app. The production servers were back to handling their original capacity and one developer was feeling very relieved.&lt;/p&gt;
&lt;h3 id=&quot;takeaways&quot;&gt;Takeaways&lt;/h3&gt;
&lt;p&gt;I learned a considerable number of things from this debugging experience. Even when using battle-tested software (like Redcarpet), there may be use cases which are not exactly supported or documented to not work. Additionally, the Redcarpet library is now rarely maintained. Knowing the limitations up front can save time and frustration. One of the main reasons why this article was written was that there was no other writing about this issue and the workarounds. Hopefully it will help save time for developers in the future who run into similar issues.&lt;/p&gt;
&lt;p&gt;It was valuable to bounce ideas off of other team members. If I had not put out my ideas and had these discussions, I would not have understood the problem as well as I did. Even the potential fix that a teammate of mine shipped but did not end up working helped me understand the problem better.&lt;/p&gt;
&lt;p&gt;Drawing out parts of the control flow on paper to really understand how the app renders content files builds a better mental model of what actually goes on inside the app. It is one thing to have a high level overview of how different components interact with each other, but it is an entirely different level of understanding to factually know what exactly happens. This can be extended to the intricacies of the software libraries being used. In this situation, knowing the internals and behaviour of Ruby on Rails, Liquid, and Redcarpet made it a lot easier to understand what was going on.&lt;/p&gt;
&lt;p&gt;Lastly, you always feel like a boss when you fix big, complicated problems such as this one.&lt;/p&gt;</content:encoded></item><item><title>Twilio&apos;s TaskRouter Quickstart</title><link>https://jonsimpson.ca/twilios-taskrouter-quickstart/</link><guid isPermaLink="true">https://jonsimpson.ca/twilios-taskrouter-quickstart/</guid><description>Twilio&apos;s TaskRouter Quickstart</description><pubDate>Sat, 26 May 2018 16:59:19 GMT</pubDate><content:encoded>&lt;p&gt;My team and I are exploring different services and technologies in the area of contact centres. We develop and maintain the tools for over 1000 support agents, with the number rapidly rising. Making smart, long-term business and technology decisions are paramount. One of the technologies we looked into was Twilio and its ecosystem – specifically TaskRouter.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/03/Twilio-Mark-Red.png&quot; alt=&quot;&quot;&gt;Twilio’s TaskRouter provides a clean interface for building contact centres. Its goal is to take the tedious infrastructure and plumbing work out of building a custom contact centre, exposing the right APIs to implement domain logic. TaskRouter is a high-level service since it orchestrates voice, SMS, and other communication channels with the ability to assign incoming interactions across a workforce of agents ready to take those interactions.&lt;/p&gt;
&lt;h2 id=&quot;twilio-ruby&quot;&gt;Twilio-Ruby&lt;/h2&gt;
&lt;p&gt;To get a head start at understanding how TaskRouter works, I spent a day looking at &lt;a href=&quot;https://www.twilio.com/docs/quickstart/ruby/taskrouter&quot;&gt;Twilio’s Ruby quickstart guide for TaskRouter&lt;/a&gt;. Wow, was I in for a frustrating time.&lt;/p&gt;
&lt;p&gt;The quickstart guide takes the reader through a number of steps, both inside of the Twilio Console as well as building a small Ruby Sinatra app. After completing the quickstart the reader should have a fully functioning call centre with an interactive voice response (IVR) to greet and queue any user that calls in.&lt;/p&gt;
&lt;p&gt;Some of the things that made the quickstart harder to complete is that the Ruby code examples included throughout used an older version of the &lt;a href=&quot;https://github.com/twilio/twilio-ruby&quot;&gt;&lt;code&gt;twilio-ruby&lt;/code&gt; gem&lt;/a&gt;. Because of this, the code examples didn’t work with the latest version. This was both a bad and good thing. Bad in that the existing code examples wouldn’t work out of the box, but good in the fact that I had to put in some extra effort into learning where the docs and other sources of help exist, and having a deeper understanding of how the Twilio API works.&lt;/p&gt;
&lt;p&gt;I compiled a list of resources that would assist anyone going through the same or a similar situation. It certainly helped me complete the TaskRouter quickstart.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/twilio/twilio-ruby/blob/master/README.md&quot;&gt;The README&lt;/a&gt; for the twilio-ruby gem provided a great overview of what functionality it provides and how the gem is to be used&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/twilio/twilio-ruby/wiki/Ruby-Version-5.x-Upgrade-Guide&quot;&gt;The v4 to v5 upgrade guide&lt;/a&gt; for the twilio-ruby gem showed that there was some sense to this chaos by providing the rationale and examples for updating old versions of the twilio-ruby code to the latest (v5). This was where I had my moment of understanding for the quickstart code examples.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/twilio/twilio-ruby/wiki/JWT-Tokens&quot;&gt;Using JWT tokens&lt;/a&gt; was part of the last section of the quickstart. Since twilio-ruby changed the way it uses tokens, its code examples had to be updated too. The main &lt;a href=&quot;https://www.twilio.com/docs/api/taskrouter/constructing-jwts&quot;&gt;Twilio docs on JWT&lt;/a&gt; goes into intricacies around building policies contained within JWT tokens&lt;/li&gt;
&lt;li&gt;My lead/manager was quite happy when I mentioned to him that the twilio-ruby gem no longer uses title case for situations where camel-case or snake-case would have been better to Ruby styling. TwiML was affected by this for a number of gem versions up until v5. Since TwiML is used frequently throughout the quickstart the docs for using &lt;a href=&quot;https://github.com/twilio/twilio-ruby/wiki/TwiML&quot;&gt;TwiML in twilio-ruby&lt;/a&gt; helped during those times.&lt;/li&gt;
&lt;li&gt;Lastly, if all else fails, feel free to reference my resulting code from the TaskRouter quickstart. &lt;a href=&quot;https://github.com/jonniesweb/twilio-taskrouter-example&quot;&gt;It’s available here on GitHub&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title>How Does Symmetric and Public Key Encryption Work?</title><link>https://jonsimpson.ca/how-does-symmetric-and-public-key-encryption-work/</link><guid isPermaLink="true">https://jonsimpson.ca/how-does-symmetric-and-public-key-encryption-work/</guid><description>How Does Symmetric and Public Key Encryption Work?</description><pubDate>Sat, 05 May 2018 16:47:55 GMT</pubDate><content:encoded>&lt;p&gt;With the release of Rails 5.2 and the changes with how secrets are securely stored, I thought it would be timely to write about the benefits and downsides of secrets management in Rails. It would be valuable to compare how Rails handles secrets, how Shopify handles secrets, and a few other methods from the open source community. On my journey to write about this I got caught up in explaining how symmetric and public key encryption work. So the post comparing different Rails secret management gems will have to wait until another post.&lt;/p&gt;
&lt;h2 id=&quot;managing-secrets-is-now-more-challenging&quot;&gt;Managing secrets is now more challenging&lt;/h2&gt;
&lt;p&gt;A majority of applications created these days integrate with other applications – whether it’s for communicating with other business-critical systems, or purely operational such as log aggregation. Secrets such as usernames, passwords, and API keys are used by these apps in production to communicate with other systems securely.&lt;/p&gt;
&lt;p&gt;The early days of the Configuration Management, and then later the DevOps movements have rallied and popularized a wide array of methodologies and tools around managing secrets in production. Moving from a small, artisanal, hand-crafted set of long-running servers to the modern short-lifetime cloud instance paradigm now requires the discipline to manage secrets securely and repeatedly, with the agility to revoke and update credentials in a matter of hours if not minutes.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Reactions to a rare human initiated deploy at &lt;a href=&quot;https://twitter.com/Shopify?ref_src=twsrc%5Etfw&quot;&gt;@shopify&lt;/a&gt; &lt;a href=&quot;https://t.co/l4eO5wrsMr&quot;&gt;pic.twitter.com/l4eO5wrsMr&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;— Camilo Lopez (@camilolopez) &lt;a href=&quot;https://twitter.com/camilolopez/status/925448439890169856?ref_src=twsrc%5Etfw&quot;&gt;October 31, 2017&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;script async charset=&quot;utf-8&quot; src=&quot;https://platform.twitter.com/widgets.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;While there’s many ways to handle secrets while developing, testing, and deploying Rails applications, it’s important to bring up the benefits and downsides to the different methods, particularly around production. Different levels of security, usability, and adoption exist with different technologies. Public/private key encryption, also known as RSA encryption, is one of the technologies. Symmetric key encryption is also another common encryption technology.&lt;/p&gt;
&lt;p&gt;There exist many ways to handle secrets within Rails and webapps in general. It’s important to understand the underlying concepts before settling on one method or another because making the wrong decision may result in secrets being insecure, or the security being too hard to use.&lt;/p&gt;
&lt;p&gt;Let’s first discuss the different types of encryption that are characteristic of the majority of secret management libraries and products out there.&lt;/p&gt;
&lt;h2 id=&quot;symmetric-key-encryption&quot;&gt;Symmetric Key Encryption&lt;/h2&gt;
&lt;p&gt;Symmetric key encryption may be the simplest form of encryption to understand, but don’t let that trick you into thinking that it’s not secure. Symmetric key encryption involves one &lt;em&gt;key&lt;/em&gt; used to both encrypt and decrypt data. This key will have to be kept secret and only be shared with trusted people and systems. Once secrets are encrypted with the key, that encrypted data can be readily shared and transferred without worry of the unencrypted data being read.&lt;/p&gt;
&lt;p&gt;A simple example of symmetric key encryption can be explained. The most straightforward method utilizes the binary XOR function. (This example is not representative of state of the art symmetric key encryption algorithms in use, but it does get the point across). The binary XOR function means “one or the other, but not both”. Here is an example that shows the complete set of inputs and outputs for one binary digit:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;1 XOR 1 = 0&lt;/code&gt;&lt;br&gt;
&lt;code&gt;1 XOR 0 = 1&lt;/code&gt;&lt;br&gt;
&lt;code&gt;0 XOR 1 = 1&lt;/code&gt;&lt;br&gt;
&lt;code&gt;0 XOR 0 = 0&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;A more complicated example would be:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;10101010 XOR 01010101 = 11111111&lt;/code&gt;&lt;br&gt;
&lt;code&gt;11111111 XOR 11111111 = 00000000&lt;/code&gt;&lt;br&gt;
&lt;code&gt;11111111 XOR 01010101 = 10101010&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Note that line 1 and 3 are related. The output of line 1 is part of the input of line 3. The second parameter of line 1 is used as the second parameter of line 3 too. Notice that the output of line 3 is the same as the first input of line 1. As demonstrated here, the XOR function will return the same input if the result of the function is fed back into itself a second time. A further example will show this property.&lt;/p&gt;
&lt;p&gt;Given the property that any higher form of data representation can be broken down to binary, we can then show the example of hexadecimal digits being XOR’ed with another parameter.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;12345678 XOR deadbeef = cc99e897&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Given the key is the hexadecimal characters &lt;code&gt;deadbeef&lt;/code&gt; and the data to be encrypted is &lt;code&gt;12345678&lt;/code&gt;, the result of the XOR is the incomprehensible result &lt;code&gt;cc99e897&lt;/code&gt;. Guess what? This &lt;code&gt;cc99e897&lt;/code&gt; is encrypted. It can be saved and passed around freely. The only way to get the secret input (ie. &lt;code&gt;12345678&lt;/code&gt;) is to XOR it again with the key &lt;code&gt;deadbeef&lt;/code&gt;. Let’s see this happen!&lt;/p&gt;
&lt;p&gt;&lt;code&gt;cc99e897 XOR deadbeef = 12345678&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Fact check it yourself if you don’t believe me, but we just decrypted the data! This is the simplest example of course, so there’s a lot more that goes into symmetric key encryption that keeps it secure. Things like block-based, and stream-based algorithms, and even larger key sizes augment the simple XOR algorithm to make it more secure. It may be simple for someone who wants to break the encryption to guess the key in this example, but it becomes much harder the longer the key size is.&lt;/p&gt;
&lt;p&gt;This is what makes symmetric key encryption so powerful – the ability to encrypt and decrypt data with a single key. With this property comes the need to keep this single key secret and separate from the data. When symmetric key encryption is used in practice, the smaller amount of people and systems that have the key the better. Humans can easily lose the key, leave jobs, or worse: share the key with people of malicious intent.&lt;/p&gt;
&lt;h2 id=&quot;public-key-encryption&quot;&gt;Public Key Encryption&lt;/h2&gt;
&lt;p&gt;Quite opposite to how symmetric key encryption works, public key encryption, (or asymmetric key encryption, or RSA encryption) uses two distinct keys. In its simplest form the public key is used for encryption and the private key is used for decryption. This method of encryption separates the need for the user who is encrypting the data from having the ability to decrypt the data. Put plainly, it allows for anyone to encrypt data with the public key while the owner of the private key is the only one able to decrypt the data. The public key can be shared with anyone without compromising the security of the encrypted data.&lt;/p&gt;
&lt;p&gt;Some tradeoffs between symmetric and public key encryption is that the private key (the key used to decrypt data) is never shared with other parties, whereas the same key is used in symmetric key encryption. Also, a downside of public key encryption is that there are multiple keys to manage, therefore it brings a higher level of overhead compared to symmetric key encryption.&lt;/p&gt;
&lt;p&gt;Let’s dig into a simple example. Given a public key (&lt;code&gt;n=55, e=3&lt;/code&gt;) and a private key (&lt;code&gt;n=55, d=27&lt;/code&gt;) we can show the math behind public key encryption. (These numbers were &lt;a href=&quot;http://haifux.org/lectures/161/RSA-lecture.pdf&quot;&gt;fetched from here&lt;/a&gt;).&lt;/p&gt;
&lt;h3 id=&quot;encrypting&quot;&gt;Encrypting&lt;/h3&gt;
&lt;p&gt;To encrypt data the function is:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;c = m^e mod n&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Where &lt;code&gt;m&lt;/code&gt; is the data to encrypt, &lt;code&gt;e&lt;/code&gt; is the public key, &lt;code&gt;mod&lt;/code&gt; is the modulus function, &lt;code&gt;n&lt;/code&gt; is the shared modulus, and &lt;code&gt;c&lt;/code&gt; is the encrypted data.&lt;/p&gt;
&lt;p&gt;For the number &lt;code&gt;42&lt;/code&gt; to be encrypted we can plug it into the formula quite simply:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;c = 42^3 mod 55&lt;/code&gt;&lt;br&gt;
&lt;code&gt;c = 3&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;c = 3&lt;/code&gt; is our encrypted data.&lt;/p&gt;
&lt;h3 id=&quot;decrypting&quot;&gt;Decrypting&lt;/h3&gt;
&lt;p&gt;Decrypting takes a similar route. For this a similar formula is used:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;m = c^d mod n&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Where &lt;code&gt;c&lt;/code&gt; is the encrypted data, &lt;code&gt;d&lt;/code&gt; is part of the private key, &lt;code&gt;mod&lt;/code&gt; is the modulus function, &lt;code&gt;n&lt;/code&gt; is the shared modulus, and &lt;code&gt;m&lt;/code&gt; is the decrypted data. Lets decrypt the encrypted data &lt;code&gt;c = 3&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;m = 3^27 mod 55&lt;/code&gt;&lt;br&gt;
&lt;code&gt;m = 42&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;And there we have it, our decrypted data is back!&lt;/p&gt;
&lt;p&gt;As we can see, a separate key is used for encryption and decryption. It’s worth restating that this example here is very simplified. Many more mathematical optimizations, and larger key sizes are used to make public key encryption secure.&lt;/p&gt;
&lt;h3 id=&quot;signing--a-freebie-with-public-key-encryption&quot;&gt;Signing – a freebie with public key encryption&lt;/h3&gt;
&lt;p&gt;Another benefit to using RSA public and private keys is that given the private key is only held by one user, that user can &lt;em&gt;sign&lt;/em&gt; a piece of data to verify that it was them who actually sent it. Anyone who has the matching public key can verify that the data was signed by the private key and that the data was not tampered with during transit.&lt;/p&gt;
&lt;p&gt;When Bob needs to receive data from Alice and Bob needs to be sure it was sent by Alice, as well as not tampered with while being sent, Alice can hash the data and then encrypt that hash with her private key. This encrypted hash is then sent along with the data to Bob. Bob can then use Alice’s public key to decrypt the hash and compare it to a hash of the data that he performs. If both of the hashes match, Bob knows that the data was truly from Alice and was not tampered with while being sent to him.&lt;/p&gt;
&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;To pick one method of encryption as the general winner at this abstract level is nonsensical. It makes sense to have a use case and pick the best encryption method for it by finding the best fit at the abstract level first, then finding a library which offers that method of encryption.&lt;/p&gt;
&lt;p&gt;A following post will go into the tradeoffs between different encryption methods in relation to keeping secrets in Ruby on Rails applications. It will take a practical approach, explaining some of the benefits of one encryption method over another, and then give some examples of well-known libraries for each category.&lt;/p&gt;</content:encoded></item><item><title>Parallel GraphQL Resolvers with Futures</title><link>https://jonsimpson.ca/parallel-graphql-resolvers-with-futures/</link><guid isPermaLink="true">https://jonsimpson.ca/parallel-graphql-resolvers-with-futures/</guid><description>Parallel GraphQL Resolvers with Futures</description><pubDate>Sun, 08 Apr 2018 16:49:49 GMT</pubDate><content:encoded>&lt;p&gt;My team and I are building a GraphQL service that wraps multiple RESTful JSON services. The GraphQL server connects to backend services such as Zendesk, Salesforce, and even Shopify itself.&lt;/p&gt;
&lt;p&gt;Our use case involves returning results from these backend services all from the same GraphQL query. When the GraphQL server goes out to query all of these backend services, each backend service can take multiple seconds to respond. This is a terrible experience if queries take many seconds to complete.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/04/graphql-card.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;
&lt;p&gt;Since we’re running the &lt;a href=&quot;http://graphql-ruby.org/&quot;&gt;GraphQL server in Ruby&lt;/a&gt;, we don’t get provided the nice asynchronous IO that would come with the &lt;a href=&quot;http://graphql.org/graphql-js/&quot;&gt;NodeJS version of GraphQL&lt;/a&gt;. Because of this, the GraphQL resolvers run serially instead of in parallel – thus a GraphQL query to five backend services which take one second each to fetch data from will result in the query taking five seconds to run.&lt;/p&gt;
&lt;p&gt;For our use case, having a GraphQL query that takes five seconds is a bad experience. What we would prefer is 2 seconds or less. This means performing some optimizations when GraphQL goes to do the HTTP requests to the backend services. Our idea is to parallelize those HTTP requests.&lt;/p&gt;
&lt;h2 id=&quot;first-approaches&quot;&gt;First Approaches&lt;/h2&gt;
&lt;p&gt;To parallelize those HTTP requests we took a look at non-blocking HTTP libraries, different GraphQL resolvers, and Ruby concurrency primitives.&lt;/p&gt;
&lt;h3 id=&quot;typhoeus&quot;&gt;Typhoeus&lt;/h3&gt;
&lt;p&gt;Knowing that running the HTTP requests in parallel is the direction to explore, we first took a look at the Ruby library &lt;a href=&quot;https://github.com/typhoeus/typhoeus&quot;&gt;Typhoeus&lt;/a&gt;. Typhoeus offers a simple abstraction for performing parallel HTTP requests by wrapping the C library libcurl. Below is one of the many possible ways to use Typhoeus.&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/5fc677b08a071c05e7c64b12af663151.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;After playing around with Typheous, we quickly found out that it wasn’t going to work without extending the GraphQL Ruby library. It became clear that it was nontrivial to wrap a GraphQL resolver’s life cycle with a Hydra from Typhoeus. A Hydra basically being a Future that runs multiple HTTP requests in parallel and returns when all requests are complete.&lt;/p&gt;
&lt;h3 id=&quot;lazy-execution&quot;&gt;Lazy Execution&lt;/h3&gt;
&lt;p&gt;We also took a look at the &lt;a href=&quot;http://graphql-ruby.org/schema/lazy_execution.html&quot;&gt;GraphQL Ruby’s lazy execution features&lt;/a&gt;. We had a hope that the lazy execution would automatically optimize by running resolvers in parallel. It didn’t. Oh well.&lt;/p&gt;
&lt;p&gt;We also tried a perverted version of lazy execution. I can’t remember why or how we came up with this method, but it was obviously overcomplicated for no good reason and didn’t work 😆&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/6fb89f4050b4b06eff199247bcf16309.js&quot;&gt;&lt;/script&gt;
&lt;h3 id=&quot;threads-and-futures&quot;&gt;Threads and Futures&lt;/h3&gt;
&lt;p&gt;We looked back and understood the shortcomings of the earlier methods – namely, we had to find a concurrency method that would allow us to do the HTTP requests in the background without blocking the main thread until it needed the data. Based on this understanding we took a look at some Ruby concurrency primitives – both Futures (from the &lt;a href=&quot;https://github.com/ruby-concurrency/concurrent-ruby&quot;&gt;Concurrent Ruby library&lt;/a&gt;), and &lt;a href=&quot;https://ruby-doc.org/core-2.5.0/Thread.html&quot;&gt;Threads&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I highly recommend using higher-order concurrency primitives such as Futures, and the like because of their well-defined and simple APIs, but for hastily hacking something together to see if it would work I experimented with Threads.&lt;/p&gt;
&lt;p&gt;My teammate ended up figuring out a working example of Futures faster than I could hack my threads example together. (I’m glad they did, since we’ll see why next.) Here is a simple use of Futures in GraphQL:&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/98870831d03616fad49b174ecfc23422.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;It’s not clear at first, but according to the GraphQL Ruby docs, any GraphQL resolver can return the data or can return something that can then return the data. In the code example above, we use the latter by returning a &lt;a href=&quot;http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Future.html&quot;&gt;&lt;code&gt;Concurrent::Future&lt;/code&gt;&lt;/a&gt; in each resolver, and having the &lt;code&gt;&amp;#x3C;a href=&quot;http://www.rubydoc.info/gems/graphql/GraphQL/Field#lazy_resolve-instance_method&quot;&gt;lazy_resolve&amp;#x3C;/a&gt;(Concurrent::Future, :value!)&lt;/code&gt; in the GraphQL schema. This means that when a resolver returns a &lt;code&gt;Concurrent::Future&lt;/code&gt;, the &lt;code&gt;lazy_resolve&lt;/code&gt; part tells GraphQL Ruby to call &lt;code&gt;:value!&lt;/code&gt; on the future when it really needs the data.&lt;/p&gt;
&lt;p&gt;What does all of this mean? When GraphQL goes to fulfill a query, all the resolvers involved with the query quickly spawn Futures that start executing in the background. GraphQL then moves to the phase where it builds the result. Since it now needs the data from the Futures, it calls the potentially blocking operation &lt;code&gt;value!&lt;/code&gt; on each Future.&lt;/p&gt;
&lt;p&gt;The beautiful thing here is that we don’t have to worry about whether the Futures have finished fetching their data yet. This is because of the powerful contract we get with using Futures – the call to &lt;code&gt;value!&lt;/code&gt; (or even just &lt;code&gt;value&lt;/code&gt;) &lt;a href=&quot;https://ruby-concurrency.github.io/concurrent-ruby/Concurrent/Future.html&quot;&gt;will block until the data is available&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;We ended up settling on the last design – utilizing Futures to allow the main thread to put as much asynchronous work into background.&lt;/p&gt;
&lt;p&gt;As seen through our thought process, all that we needed was to find a way to start execution of a long-running HTTP request, and give back control to the main thread as fast as possible. It was quite clear throughout the early ideas of utilizing concurrent HTTP request libraries (Typhoeus) that we were on the right track, but weren’t understanding the problem perfectly.&lt;/p&gt;
&lt;p&gt;Part of that was not understanding the GraphQL Ruby library. Part of it was also being fuzzy on our concurrent primitives and libraries. Once we had taken a look at GraphQL Ruby’s lazy loading features, it became clear to us that we needed to kick-off the HTTP request and immediately give back control to the GraphQL Ruby library. Once we understood this, the solution became clear and we became confident after some prototypes that used Futures.&lt;/p&gt;
&lt;p&gt;I enjoyed the problem solving process we went through, as well as this writing that resulted from it. The problem solving process ended up teaching the both of us some valuable engineering lessons about collaborative, up-front prototyping and design since we couldn’t have achieved this outcome on our own. Additionally, writing about this success can help others with our direct problem, not to mention learning about the different techniques that we met along the way.&lt;/p&gt;</content:encoded></item><item><title>Zero to One Hundred in Six Months: Notes on Learning Ruby and Rails</title><link>https://jonsimpson.ca/zero-to-one-hundred-in-six-months-notes-on-learning-ruby-and-rails/</link><guid isPermaLink="true">https://jonsimpson.ca/zero-to-one-hundred-in-six-months-notes-on-learning-ruby-and-rails/</guid><description>Zero to One Hundred in Six Months: Notes on Learning Ruby and Rails</description><pubDate>Sat, 20 Jan 2018 20:20:09 GMT</pubDate><content:encoded>&lt;p&gt;When they say you’ll like learning and programming in Ruby, they really mean it. From my experience learning and professionally using Ruby and Ruby-on-Rails day-to-day has been quite straightforward and friendly. The rate at learning Ruby and Rails is limited to how fast you’re able to obtain and use that knowledge from either resources online, in a book, or from other people.&lt;/p&gt;
&lt;p&gt;It’s common for people who join Shopify to not know how to program in Ruby, yet will be required to. Ruby’s community has grown a great deal of beginner to intermediate guides for newcomers to quickly get up to speed at programming in Ruby. At Shopify, since the feedback is so fast, you’re able to get into an intense virtuous cycle of learning. Since you’re able to code and ship fast, you’re able to learn faster.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/01/rails5-1.jpg&quot; alt=&quot;&quot;&gt;From personal experience I found it quite useful to do a deep-dive into Rails before starting to learn the full Ruby language. I focused on reading the entire &lt;a href=&quot;https://pragprog.com/book/rails5/agile-web-development-with-rails-5&quot;&gt;Agile Web Development with Rails 5 book&lt;/a&gt;, which consists of a short primer on Ruby, then the bulk being how to develop an online store using Rails, and lastly an in-depth look into each Rails module. I completed this book over the two weeks before starting at Shopify to give me a head-start at learning.&lt;/p&gt;
&lt;p&gt;Roughly the first two months spent at Shopify were a split of working on small tasks by myself, pair-programming with others, and reading a number of Ruby and Rails articles. At the end of two months I found myself being able to take pieces of work from our weekly sprints and completing them to the end without feeling like I was slow, and not requiring a team member to guide me through the entire change.&lt;/p&gt;
&lt;p&gt;Code reviews over GitHub on the changes that myself and others have made provided a strong signal on how well my Ruby and Rails knowledge and style has progressed. Code reviews for my code at the start consisted of a lot of comments on style and better methods to use. As more and more code reviews were performed over time my intuition and knowledge increased, resulting in better code and less review comments. The bite-sized improvements gained in each code review slowly built up my knowledge and helped guide me towards areas of further learning.&lt;/p&gt;
&lt;p&gt;Mastering Ruby and Rails is gained over months to years of constant use. This is where the lesser-known to obscure language features are understood and put to use, or explicitly not put to use (I’m looking at you metaprogramming!) Some examples being the &lt;a href=&quot;http://vaidehijoshi.github.io/blog/2015/06/02/code-smells-and-ruby-shorthand-unpacking-ampersand-plus-to-proc/&quot;&gt;unary &amp;#x26; operator&lt;/a&gt;, &lt;a href=&quot;https://bparanj.gitbooks.io/ruby-basics/content/part2/binding.html&quot;&gt;bindings&lt;/a&gt;, and even Ruby internals such as &lt;a href=&quot;https://brandur.org/ruby-memory&quot;&gt;object allocation&lt;/a&gt; and &lt;a href=&quot;https://www.nasseri.io/posts/1.html&quot;&gt;how Ruby programs are interpreted&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Coming from the statically-typed Java world, Ruby and Rails is INSANE with the things you can do since it is a dynamic language. My favourite dynamic language related features so far are &lt;a href=&quot;http://gshutler.com/2013/04/ruby-2-module-prepend/&quot;&gt;Module#prepend&lt;/a&gt; for inheritance, and the ability to live-reload code.&lt;/p&gt;
&lt;p&gt;After a sufficient amount of time gathering knowledge and experience, you gain the ability to help others along their path of learning. This not only benefits their understanding, but it also reinforces your knowledge of the subject.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2018/01/RUM_coverfront.png&quot; alt=&quot;&quot;&gt;Some of the things I look forward to in the future are learning about optimizing Rails apps, dispelling metaprogramming, reading &lt;a href=&quot;http://patshaughnessy.net/ruby-under-a-microscope&quot;&gt;Ruby Under a Microscope&lt;/a&gt;, and digging into the video back-catalogue of &lt;a href=&quot;https://www.destroyallsoftware.com/&quot;&gt;Destroy All Software&lt;/a&gt;. I hope you have a good journey too!&lt;/p&gt;</content:encoded></item><item><title>What the hack? - or how my first capture the flag went</title><link>https://jonsimpson.ca/what-the-hack-or-how-my-first-capture-the-flag-went/</link><guid isPermaLink="true">https://jonsimpson.ca/what-the-hack-or-how-my-first-capture-the-flag-went/</guid><description>What the hack? - or how my first capture the flag went</description><pubDate>Thu, 30 Nov 2017 03:16:30 GMT</pubDate><content:encoded>&lt;p&gt;The 2017 &lt;a href=&quot;http://bsidesottawa.ca&quot;&gt;BSides Security Conference&lt;/a&gt;, just outside of Ottawa, was a two day event from October 5th to 6th. It was packed with talks, lock picking, and a capture the flag (CTF) competition. Pretty great for being a free conference.&lt;/p&gt;
&lt;p&gt;On the second day of the conference I decided to join one of the Shopify CTF teams since it looked like a ton of fun. Actually, I think it was the deep house playing 24/7 was what lured me into the dim and crowded CTF room of the conference centre. I subbed in for one of my friends on Shopify’s Red Team, which was suitably named for Shopify’s second CTF team. Shopify’s first team was named the Blue Team.&lt;/p&gt;
&lt;p&gt;I thought I knew what CTF’s were all about – hacking challenges, they say. But I was completely unprepared. My “so called” 10+ years of listening to the Security Now podcast didn’t exactly prepare me for the hands-on experience required for CTFs. It was quite the learning experience since most of the flags remaining on day 2 were difficult to capture for a newbie.&lt;/p&gt;
&lt;p&gt;Having some of a background in security and hacking helps, though it doesn’t bridge the gap between the hacking experience and intuition required to solve CTF challenges. These challenges require experience and practice in thinking like an attacker.&lt;/p&gt;
&lt;p&gt;For example, it’s one thing to understand that data can be hidden in images via steganography, but it’s another thing completely to actually extract the hidden data from an image.&lt;/p&gt;
&lt;p&gt;Instead of wasting time on finding unknown flags, I focused on the topics I have experience with. Most of the flags I focused on were WEP and WPA cracking with aircrack-ng, and it’s associated collection tools. I was not able to inject packets with my setup, but luckily some other competitors did the hard work for me. After a few hours of unsuccessful attempts to crack the Wi-Fi networks I conceded that my attempts weren’t working.&lt;/p&gt;
&lt;p&gt;I moved onto a new flag that involved breaking into an old exploited version of Joomla. After asking for some help from a teammate we found a script on exploit-db that would raise privileges to admin for any user. After running the exploit it took me a bit to figure out that it ran successfully since the flag was locked inside a Page that was locked for editing by someone else. The ‘locked for editing’ didn’t allow reading the Page, but after figuring out that the Page had a context menu to unlock it enabled me to view the flag. That made me facepalm both at Joomla’s UI and my inability to figure that out sooner.&lt;/p&gt;
&lt;p&gt;After a day filled with a of couple muffins, a few slices of pizza, and countless teas the CTF concluded around 5pm. Winners were announced and thankfully our team didn’t fail too hard. I came out of the competition having met a bunch of colleagues from different parts of the company, and the expectation of what to expect in future CTFs. I’ll definitely be attending another CTF.&lt;/p&gt;
&lt;p&gt;My team, the Red-Team, placed somewhere around 5th or 6th. Not too bad for having a handicap on it like myself. I got to hand it to Shopify, they have some seriously talented Security folks! No wonder Shopify’s Blue-Team came in first!&lt;/p&gt;</content:encoded></item><item><title>Twenty-Three!</title><link>https://jonsimpson.ca/twenty-three/</link><guid isPermaLink="true">https://jonsimpson.ca/twenty-three/</guid><description>Twenty-Three!</description><pubDate>Mon, 02 Oct 2017 03:36:11 GMT</pubDate><content:encoded>&lt;p&gt;I’m Twenty-Three dude! The first of October fell on a Sunday this year. The personal milestones and events that have occurred certainly make this a year for me to remember. Here’s a few of the main ones to note:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Completed my Bachelor of Computer Science – &lt;em&gt;five years of hard work has finally(?) paid off&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Started a new job at Shopify – &lt;em&gt;one of the hottest, fastest growing, and prestigious unicorn companies&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Ran my first 10k race during the Ottawa Race Weekend&lt;/li&gt;
&lt;li&gt;Travelled along the California Coast, around the heart of New York, and all over Montreal&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The celebrations started with Saturday morning. A bunch of friends and colleagues of mine grabbed breakfast to wish a colleague farewell and best wishes being back at school.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2017/10/panda-game-2017.gif&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;The crowd storms the field as Carleton wins at Sportsball!&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Just like last year, a larger group of us met up at the TD Place for the annual university football Panda Game between Carleton and Ottawa U (Carleton won, of course :P). A lot of time was spent laughing at our once younger selves acting foolish in the crowd. After the win we spent a few hours pigging out at an all-you-can-eat sushi restaurant.&lt;/p&gt;
&lt;p&gt;To finish the day off we visited the pool and hot tub at a friend’s apartment, and played the card game What do you Meme? Sunday, my actual birthday, has been spent relaxing and writing this post. I’m sure my colleagues will have more festivities planned for next week.&lt;/p&gt;
&lt;p&gt;Additionally, this year I got more into road biking – putting just over a thousand kilometres on a new road bike, and visiting a few new locations in the Ottawa-Gatineau area – Cafe British in Alymer, Quebec is worth the trek on a nice sunny day.&lt;/p&gt;
&lt;p&gt;Since I’ve been interested in craft beers, this year I received a homebrewing kit as a gift. The process is laborious for a day or two but is quite rewarding at the end. I’ve completed two large batches so far – a wheat beer and an IPA. They’re pretty drinkable, but the quality isn’t high enough for the ingredients I used. I’ll definitely have to make a higher quality batch of beer, or even delve into making wine.&lt;/p&gt;
&lt;p&gt;At the end of October last year I travelled with my Aunt and a few of her friends to Manhattan in New York. What a city! The sheer size and bustle of everything going on at all hours of the day is intoxicating. The restaurant scene was tasty, even when eating within a reasonable budget. The attractions and Central Park were fascinating. I would recommend any first-timers to get a City-Pass to get access to the major attractions. It was definitely a memorable week.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2017/10/IMG_20170620_134523.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Me shaking Leo’s hand at the TWiT studios.&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;For a graduation trip in June I flew over to California with my mom. We spent a solid 10 days rv-ing and camping from north of San Francisco all the way down to San Diego. The climate, scenery, and attractions made it an amazing experience. We stopped by the TWiT studios to see Leo Laporte, a longtime mentor of mine for 11 years. Leo’s podcasts have taught me a staggering amount about technology news and computer security. It was great to have been able to see him in person and watch a live recording of Security Now with Steve Gibson.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2017/10/IMG_20170625_194241-1.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;A San Diego sunset.&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;During the trip we had to stop by the Computer History museum. Holy cow! That museum is big. I could have spent a few days there reading everything. One particular part of the museum I found memorable was showing my mom a diagram of the history of programming languages, pointing to the ones I knew, and the ones I would soon learn at Shopify. Seeing some of the first Google servers were pretty nostalgic in a way with how scrappy they were to get a lot of computing power for cheap.&lt;/p&gt;
&lt;p&gt;Some of the rough goals I have this year are to advance my career, improve on my social skills, become friends with more people, and build up some muscle. I have surprised myself already with the speed and progress that has been made across these goals so far.&lt;/p&gt;
&lt;p&gt;Since starting work at Shopify the thought of moving closer to work has really been bugging me. A 20 minute bike ride is great in the summertime, but &lt;em&gt;Winter is Coming&lt;/em&gt; – and Ottawa doesn’t fall short with its winters. The social scene with colleagues and all the activities that are had downtown are more attracting factors for moving closer. I’m really liking the idea of moving into a 1-bedroom apartment – gaining more privacy and being able to focus more on myself compared to living with my entertaining and sometimes distracting roommates (in a good way).&lt;/p&gt;
&lt;p&gt;I’m not sure how I can top the major events from this past year. Maybe moving into my very own place, attending a conference or two, performing my first conference talk, or even some travelling with friends? I’m down for all of those.&lt;/p&gt;</content:encoded></item><item><title>Installing Go the Right Way</title><link>https://jonsimpson.ca/installing-go-the-right-way/</link><guid isPermaLink="true">https://jonsimpson.ca/installing-go-the-right-way/</guid><description>Installing Go the Right Way</description><pubDate>Thu, 18 May 2017 03:54:22 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/static/images/2017/05/go-gopher.png&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;100% Derpy&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;It’s a pain to get the latest version of the &lt;a href=&quot;https://golang.org/&quot;&gt;Go programming language&lt;/a&gt; installed on a Debian or Ubuntu system.&lt;/p&gt;
&lt;p&gt;Oftentimes if your operating system is not the latest then there is a slim chance that there will be an up to date version of Go available. You may get lucky and find newer versions of Go in some random person’s PPA, where they’ve backported newer versions to older operating systems. This works, but newly released versions are reliant on the package maintainer to update and push it out.&lt;/p&gt;
&lt;p&gt;Other options of installing the latest version of Go may involve building the package from source. This method is often tedious and can be error prone with the number of steps involved. Not exactly for the faint of heart.&lt;/p&gt;
&lt;p&gt;Command line tools have been built for certain programming languages to streamline the installation of new versions. For Go, &lt;a href=&quot;https://github.com/moovweb/gvm&quot;&gt;GVM is the Go Version Manager.&lt;/a&gt; Inspired by RVM, the Ruby Version Manager, GVM makes it quite simple to install multiple versions of Go, and to switch between them with one simple command.&lt;/p&gt;
&lt;p&gt;The only downside that GVM has is that it’s not installed via a system package (eg. a deb file). Don’t let that worry you too much though! Installation is as simple as running the following curl-bash, and then using the GVM command to start installing different versions of Go. &lt;a href=&quot;https://github.com/moovweb/gvm#installing&quot;&gt;Here’s the installation guide/readme&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;bash &amp;#x3C; &amp;#x3C;(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;One confusing point when using GVM to install the latest version of Go resulted in a failed installation. This made no sense. Eventually RTFM’ing resulted in understanding that you first have to install an earlier version of Go to “&lt;em&gt;bootstrap&lt;/em&gt;” the installation of any version of Go later than 1.5. &lt;a href=&quot;https://github.com/moovweb/gvm#a-note-on-compiling-go-15&quot;&gt;Explained here in more detail&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After following their instructions to install Go 1.4 it was now possible to install the latest version of Go and get on with coding!&lt;/p&gt;</content:encoded></item><item><title>Private Docker Repositories with Artifactory</title><link>https://jonsimpson.ca/private-docker-repositories/</link><guid isPermaLink="true">https://jonsimpson.ca/private-docker-repositories/</guid><description>Private Docker Repositories with Artifactory</description><pubDate>Mon, 08 May 2017 02:54:34 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/static/images/2015/11/docker-large.png&quot; alt=&quot;&quot;&gt;A while ago I was looking into what it takes to setup a private Docker Registry. The simplest way involves running the vanilla Docker Registry image and a small amount of configuration &lt;em&gt;(vanilla&lt;/em&gt; is used to distinguish the official Docker Registry from the Artifactory Docker Registry offering)&lt;em&gt;.&lt;/em&gt; The &lt;a href=&quot;https://docs.docker.com/registry/&quot;&gt;vanilla Docker Registry&lt;/a&gt; is great for proof of concepts or for people who want to design a custom solution, but in organizations where there are multiple environments (QA, staging, prod) wired together using a Continuous Delivery pipeline – &lt;a href=&quot;https://www.jfrog.com/artifactory/&quot;&gt;JFrog Artifactory&lt;/a&gt; is well suited for the task.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2017/05/Artifactory_HEX1.png&quot; alt=&quot;&quot;&gt;Artifactory, the fantastic artifact repository for storing your &lt;a href=&quot;https://www.jfrog.com/confluence/display/RTF/Maven+Repository&quot;&gt;Jars&lt;/a&gt;, &lt;a href=&quot;https://www.jfrog.com/confluence/display/RTF/RubyGems+Repositories&quot;&gt;Gems&lt;/a&gt;, and &lt;a href=&quot;https://www.jfrog.com/confluence/display/RTF/Package+Management&quot;&gt;other valuables&lt;/a&gt; has an extension to host &lt;a href=&quot;https://www.jfrog.com/confluence/display/RTF/Docker+Registry&quot;&gt;Docker Repositories&lt;/a&gt; to store and manage Docker images as first-class citizens of Artifactory.&lt;/p&gt;
&lt;h3 id=&quot;features&quot;&gt;Features&lt;/h3&gt;
&lt;p&gt;Here’s a few compelling features which make Artifactory worthwhile over the vanilla Docker Registry.&lt;/p&gt;
&lt;h5 id=&quot;role-based-access-control&quot;&gt;Role-based access control&lt;/h5&gt;
&lt;p&gt;The Docker Registry image doesn’t come with any fine-grained access control. The best that can be done is either &lt;a href=&quot;https://www.ctl.io/developers/blog/post/how-to-secure-your-private-docker-registry/&quot;&gt;allowing or disallowing access to all operations on the registry by the use of a &lt;em&gt;.htpasswd&lt;/em&gt; file&lt;/a&gt;. In the best scenario, each user of the registry has their own username and password.&lt;/p&gt;
&lt;p&gt;Artifactory uses its own fine-grained access control mechanisms to secure the registry – enabling users and groups to be assigned permissions to read, write, deploy, and modify properties. Access can be configured through the Artifactory web UI, REST API, or AD/LDAP.&lt;/p&gt;
&lt;h5 id=&quot;transport-layer-security&quot;&gt;Transport Layer Security&lt;/h5&gt;
&lt;p&gt;If enabled, Artifactory will use the same TLS encryption it uses for Docker Registries. Unlike a vanilla Docker Registry, there is no need to setup a reverse proxy to tunnel all insecure HTTP connections over HTTPS. The web UI offers a screen to copy and paste authentication information for connecting to the secured Artifactory Registry.&lt;/p&gt;
&lt;h5 id=&quot;data-retention&quot;&gt;Data Retention&lt;/h5&gt;
&lt;p&gt;Artifactory has the option to automatically purge old Docker images when the unique number of tags has grown to a certain size. This keeps the number of available images, and therefore the storage space, within reason. The results of not having old images purged can lead to running out of disk space, or for you cloud users, expensive object storage bills.&lt;/p&gt;
&lt;h5 id=&quot;image-promotion&quot;&gt;Image Promotion&lt;/h5&gt;
&lt;p&gt;Continuous delivery defines the concept of pipelines. These pipelines represent the flow of commits from when a developer checks in their code to the SCM, all the way through CI, and eventually into production. Continuous Delivery organizations who have chosen to use multiple environments for validating their software changes would “promote” a version of the software from one environment to the next. A version would only be promoted if it passed the validation requirements for that environment.&lt;/p&gt;
&lt;p&gt;For example, the promotion of version 123 would first go through the QA environment, then the Staging environment, then the Production environment.&lt;/p&gt;
&lt;p&gt;Artifactory includes Docker image promotion as a first-class feature, separating it from the vanilla Docker Registry. What would be a series of manual steps, or a script to run, is now a single API endpoint to promote a Docker image from one registry to another.&lt;/p&gt;
&lt;h5 id=&quot;browsing-for-images&quot;&gt;Browsing for Images&lt;/h5&gt;
&lt;p&gt;The Artifactory UI already has the ability to look at various artifacts contained in Maven, NPM, and other types of repositories. It was only natural to offer the same service for Docker Registries. All images of a repository can be listed and searched upon. Images can be further described by showing the various tags and layers that compose it.&lt;/p&gt;
&lt;p&gt;The current vanilla Docker Registry doesn’t have a GUI. It is only through third-party projects that a GUI can be provided to offer the same functionality as Artifactory.&lt;/p&gt;
&lt;h5 id=&quot;remote-registries&quot;&gt;Remote Registries&lt;/h5&gt;
&lt;p&gt;Artifactory has the ability to provide a caching layer for registries. Performance is gained when images and metadata are fetched from the cached Artifactory instance, preventing the time and latency incurred from going to the original registry. Resiliency is also gained since the Artifactory instance can continue serving cached images and metadata to satisfy client requests even when the remote registry has become unavailable. (&lt;a href=&quot;https://aws.amazon.com/message/41926/&quot;&gt;S3 outage anyone?&lt;/a&gt;)&lt;/p&gt;
&lt;h5 id=&quot;virtual-registries&quot;&gt;Virtual Registries&lt;/h5&gt;
&lt;p&gt;Besides hosting local and caching remote registries, virtual registries is a combination of the two. Virtual registries unite images from a number of local and remote registries, enabling Docker clients to conveniently use just a single registry. Administrators are then able to change the backing registries when needed, requiring no change on the client’s side.&lt;/p&gt;
&lt;p&gt;This is most useful for humans who need ad hoc access to multiple registries that correspond to multiple environments. For example, the QA, Staging, Production, and Docker Hub registries can be combined together, making it seem like one registry to the user instead of four different instances. Machines running in the Production environment, for example, could only have access to the Production Docker Registry, thereby preventing any accidental usage of unverified images.&lt;/p&gt;
&lt;h3 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Artifactory is a feature rich artifact tool for Maven, NPM, and many other repository types. The addition of Docker Registries to Artifactory provides a simple solution that caters towards organizations who are implementing Continuous Delivery practices.&lt;/p&gt;
&lt;p&gt;If you’re outgrowing an existing vanilla Docker Registry, or entirely new to the Docker game then give Artifactory a try for your organization, it won’t disappoint.&lt;/p&gt;</content:encoded></item><item><title>Practicing User Safety at GitHub</title><link>https://jonsimpson.ca/practicing-user-safety-at-github/</link><guid isPermaLink="true">https://jonsimpson.ca/practicing-user-safety-at-github/</guid><description>Practicing User Safety at GitHub</description><pubDate>Sun, 19 Mar 2017 23:21:51 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://githubengineering.com/community-and-safety-feature-reviews/&quot;&gt;&lt;img src=&quot;/static/images/2017/03/1440-helmet-with-white-cross.png&quot; alt=&quot;&quot;&gt;GitHub explains a few of their guidelines for harassment and abuse prevention when they’re developing new features&lt;/a&gt;. Some of the interesting points in the article include a list of privacy-oriented questions to ask yourself when developing a new feature, providing useful audit logs for retrospectives, and minimizing abuse from newly created accounts by restricting access to the service’s capabilities. All of these points taken into consideration make it harder for abuse to occur, making the service a better environment for its users.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://githubengineering.com/community-and-safety-feature-reviews/&quot;&gt;See the original article.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title>A few Gotchas with Shopify API Development</title><link>https://jonsimpson.ca/a-few-gotchas-with-shopify-api-development/</link><guid isPermaLink="true">https://jonsimpson.ca/a-few-gotchas-with-shopify-api-development/</guid><description>A few Gotchas with Shopify API Development</description><pubDate>Sun, 12 Feb 2017 04:38:02 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/static/images/2017/02/shopify-logo.png&quot; alt=&quot;&quot;&gt;I had a fun weekend with my roommate hacking on the Shopify API and learning the Ruby on Rails framework. Shopify makes it super easy to begin building Shopify Apps for the Shopify App Store – essentially the Apple App Store equivalent for Shopify store owners to add features to their customer facing and backend admin interfaces. Shopify provides two handy Ruby gems to speed up development: &lt;a href=&quot;https://rubygems.org/gems/shopify_app&quot;&gt;&lt;em&gt;shopify_app&lt;/em&gt;&lt;/a&gt; and &lt;a href=&quot;https://rubygems.org/gems/shopify_api&quot;&gt;&lt;em&gt;shopify_api&lt;/em&gt;&lt;/a&gt;. An overview of the two gems are given and then their weaknesses are explained.&lt;/p&gt;
&lt;p&gt;Shopify provides a handy gem called &lt;em&gt;shopify_app&lt;/em&gt; which makes it simple to start developing an app for the Shopify App Store. The gem provides Rails generators to create controllers, add webhooks, configure the basic models and add the required OAuth authentication – just enough to get started.&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;shopify_api&lt;/em&gt; gem is a thin wrapper of the Shopify API. &lt;em&gt;shopify_app&lt;/em&gt; integrates it into the controllers automatically, making requests for a store’s data very simple.&lt;/p&gt;
&lt;h3 id=&quot;frustrations-with-the-api&quot;&gt;Frustrations With the API&lt;/h3&gt;
&lt;p&gt;The process of getting a &lt;a href=&quot;https://www.shopify.ca/partners&quot;&gt;developer account and developer store&lt;/a&gt; created takes no time at all. The &lt;a href=&quot;https://help.shopify.com/api/reference&quot;&gt;API documentation&lt;/a&gt; is clear for the most part. Though attempting to develop using the &lt;em&gt;Plus&lt;/em&gt; APIs can be frustrating when using the APIs for the first time. For example, querying the &lt;a href=&quot;https://help.shopify.com/api/reference/discount&quot;&gt;Discount API&lt;/a&gt;, &lt;a href=&quot;https://help.shopify.com/api/reference/gift_card&quot;&gt;Gift Card API&lt;/a&gt;, &lt;a href=&quot;https://help.shopify.com/api/reference/multipass&quot;&gt;Multipass API&lt;/a&gt;, or &lt;a href=&quot;https://help.shopify.com/api/reference/user&quot;&gt;User API&lt;/a&gt; results in unhelpful 404 errors. The development store’s admin interface is misleading as a discounts section can be accessed where discounts may be added and removed.&lt;/p&gt;
&lt;p&gt;By default, anyone who signs up to become a developer only has access to the standard API endpoints, leaving no access to the &lt;em&gt;Plus&lt;/em&gt; endpoints. These &lt;em&gt;Plus&lt;/em&gt; endpoints are only available to stores which pay for Shopify Plus, and after digging into many Shopify discussion boards it was explained by a Shopify employee that developers need to work with a store who pays for Shopify Plus to get access to those &lt;em&gt;Plus&lt;/em&gt; endpoints. The 404 error when accessing the API didn’t explain this and only added confusion to the situation.&lt;/p&gt;
&lt;p&gt;One area that could be improved is that there is little mention of tiered developer accounts. The API should at least give a useful error message in the response’s body explaining what is needed to gain access to it.&lt;/p&gt;
&lt;h3 id=&quot;webhooks-could-be-easier-to-work-with&quot;&gt;Webhooks Could be Easier to Work With&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2017/02/webhook.png&quot; alt=&quot;&quot;&gt;The &lt;a href=&quot;https://github.com/Shopify/shopify_app#webhooksmanager&quot;&gt;&lt;em&gt;shopify_app&lt;/em&gt; gem provides a simple way to define any webhooks&lt;/a&gt; that should be registered with the Shopify API for the app to function. The defined webhooks are registered only once after the app is added to a store. During development you may add and remove many webhooks for your app. Since defined webhooks are only registered when the app is added to a store the most straightforward way to refresh the webhooks is to remove the app from the store and then add it again.&lt;/p&gt;
&lt;p&gt;This can become pretty tedious which is why I did some digging around in the &lt;em&gt;shopify_app&lt;/em&gt; code and created the following code sample to synchronize the required webhooks with the Shopify API. Simply hit this controller or call the containing code somewhere in the codebase.&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/bd08607fc3c44efcc94ec190d5bec2d2.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;If there’s a better solution to this problem please let me know.&lt;/p&gt;
&lt;p&gt;Lastly, to keep track of your sanity the &lt;a href=&quot;https://rubygems.org/gems/httplog/&quot;&gt;&lt;em&gt;httplog&lt;/em&gt;&lt;/a&gt; gem is useful to track the http calls that &lt;em&gt;shopify_app, shopify_api&lt;/em&gt; and any other gem makes.&lt;/p&gt;
&lt;h3 id=&quot;wrapping-up&quot;&gt;Wrapping Up&lt;/h3&gt;
&lt;p&gt;The developer experience on the Shopify API and app store is quite pleasing. It has been around long enough to build up a flourishing community of people asking questions and sharing code. I believe the issues outlined above can be easily solved and will make Shopify a more pleasing platform.&lt;/p&gt;</content:encoded></item><item><title>The Software Engineering Daily Podcast is Highly Addictive</title><link>https://jonsimpson.ca/the-software-engineering-daily-podcast-is-highly-addictive/</link><guid isPermaLink="true">https://jonsimpson.ca/the-software-engineering-daily-podcast-is-highly-addictive/</guid><description>The Software Engineering Daily Podcast is Highly Addictive</description><pubDate>Sat, 04 Feb 2017 03:24:59 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/static/images/2016/03/se-daily-logo.jpg&quot; alt=&quot;&quot;&gt;Over the past several months the Software Engineering Daily podcast has entered my regular listening list. I can’t remember where I discovered it, but I was amazed at the frequency at which new episodes were released and the breadth of topics. Since episodes come out every weekday there’s always more than enough content to listen to. I’ve updated &lt;a href=&quot;https://jonsimpson.ca/my-top-tech-software-and-comedy-podcast-list/&quot;&gt;My Top Tech, Software and Comedy Podcast List&lt;/a&gt; to include Software Engineering Daily. Here are a few episodes that have stood out:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://softwareengineeringdaily.com/2016/07/06/schedulers-with-adrian-cockcroft/&quot;&gt;Scheduling with Adrian Cockroft&lt;/a&gt; was quite timely as part of my final paper for my undergraduate degree focused on the breadth of topics in scheduling. Adrian discussed many of the principles of scheduling and related them to how they were applied at Netflix and earlier companies. Scheduling is really a necessity for software developers to know as scheduling occurs in all layers of the software and hardware stack.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://softwareengineeringdaily.com/2016/10/05/developer-roles-with-dave-curry-and-fred-george/&quot;&gt;Developer Roles with Dave Curry and Fred George&lt;/a&gt; was very entertaining and informative as it presented the idea of “Developer Anarchy”, a different structure to running, (or not running), development teams. Instead of hiring Project Managers, Quality Assurance, or DBAs to fill a specific niche of a development team, you mainly hire programmers and leave them to perform all of those tasks according to what they deem is necessary.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://softwareengineeringdaily.com/2017/02/13/infrastructure-with-datanauts-chris-wahl-and-ethan-banks/&quot;&gt;Infrastructure with Datanauts’ Chris Wahl and Ethan Banks&lt;/a&gt; entertained as much as it informed. This episode had a more casual setting as the hosts told stories and brought years of experience to the current and future direction of infrastructure in all layers of the stack. Comparing the current success of Kubernetes to the not-so-promising OpenStack was quite informative as it showed that multiple supporting organizations drove the OpenStack project to have different priorities and visions, whereas Google, being the single organization to drive Kubernetes, is shown to have one single, unified vision.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;EDIT 2017-02-26 – Add Datanauts episode&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title>Better Cilk Development With Docker</title><link>https://jonsimpson.ca/better-cilk-development-with-docker/</link><guid isPermaLink="true">https://jonsimpson.ca/better-cilk-development-with-docker/</guid><description>Better Cilk Development With Docker</description><pubDate>Thu, 06 Oct 2016 20:41:29 GMT</pubDate><content:encoded>&lt;p&gt;I’m taking a course that focuses on parallel and distributed computing. We use a compiler extension for GCC called Cilk to develop parallel programs in C/C++. Cilk offers developers a simple method for developing parallel code, and as a plus it now comes included in GCC since version 4.9.&lt;/p&gt;
&lt;p&gt;The unjust thing with this course is that the professor provides a hefty 4GB Ubuntu virtual machine just for running the GNU compiler with Cilk. No sane person would download an entire virtual machine image just to run a compiler.&lt;/p&gt;
&lt;p&gt;Docker comes to the rescue. It couldn’t be more space effective and convenient to use Cilk from a Docker container. I’ve created a simple Dockerfile containing the latest GNU compiler for Ubuntu 16.04. Here are some Gists showing how to build and run a Dockerfile which contain the dependencies needed to build and run Cilk programs.&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/140c584ecd01f48209199ef5b3c023d6.js&quot;&gt;&lt;/script&gt;</content:encoded></item><item><title>Twenty-Two</title><link>https://jonsimpson.ca/twenty-two/</link><guid isPermaLink="true">https://jonsimpson.ca/twenty-two/</guid><description>Twenty-Two</description><pubDate>Sat, 01 Oct 2016 04:05:16 GMT</pubDate><content:encoded>&lt;p&gt;Another grand year has gone by since my last birthday. Here’s my look back on the year as I turn 22 today. I’m on the edge of finishing my Computer Science degree (yay!), had a blast spending time vacationing with my family, and succeeded at completing a few personal goals, to name a few.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/10/champlain-point-lookout.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;The view from above! – At Champlain Point Lookout in Gatineau Park&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;I have accomplished my goal of biking to Gatineau’s Champlain Lookout – a hefty 60km ride on a franken-road bike that safely got me there. Getting hooked on the Strava app has helped gamify my cycling fitness. When my Grandmother passed away this summer we spent five days in Owen Sound, a quiet town on Georgian Bay. I had the itch to go biking one day and ended up renting a proper road bike compared to what I have. Two hours later and I’ve cycled 42km, swam in the water and enjoyed the scenery pass by, completely satisfied.&lt;/p&gt;
&lt;p&gt;This time last year ZDirect, the previous name of the company I work at, got acquired by TravelClick. Over the year we’ve hit home run after home run at delivering on making changes, namely a successful datacentre migration, rebranding of the UI, new profile screen and plenty of features to boot. We’re expanding rapidly: acquiring new office space, hiring more developers, account managers and support people. An incremental integration into TravelClick has been happening, involving processes, infrastructure and software. The sales figures show we’re doing something right. Not a few weeks from now I’ll be in New York for a short vacation. During this time I’ll stop by TravelClick headquarters in Times Square to say Hi and see if I can grab some swag.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/10/work.jpg&quot; alt=&quot;Celebrating our hard work with a barbeque &quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Celebrating our hard work with a barbeque&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;My main achievement during this summer of working at TravelClick has been implementing a weekly Continuous Improvement meeting for my team where we improve our processes and software by discussing and planning items of action. Seeing the entire team engage and drive the discussion, including planning on what should be done is the truest sign of having succeeded, not to mention the improvements that are being made.&lt;/p&gt;
&lt;p&gt;I have started listening to a lot more software-related podcasts. Learning FTW! It’s crazy the amount of information you can learn just by listening when you’re not doing any brain-intensive tasks.&lt;/p&gt;
&lt;p&gt;Continuing to experiment with new recipes, I have begun making sourdough bread as it’s much healthier than your regular white loaf. Vegetables and fruits have become more prominent in my meals, as well as Thai coconut soups every once in a while. I’ve reduced the number of unhealthy foods I eat such as bucket-loads of homemade pizza and frozen foods like perogies.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/10/christmas-vacation.jpg&quot; alt=&quot;christmas-vacation&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;No better way to surprise my aunt than with a diorama&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;On the topic of health, I was disciplined for the first half of the year when it came to performing workouts five to six nights a week. Freeletics is an excellent social exercise app that only requires your body, a pull-up bar and a small amount of space to do short but intense exercises. Besides the pre-programmed workouts, performing 250 sit-ups a night definitely helped get that six-pack ready for the February trip to the Dominican Republic. (On a related note, Long Island Iced Teas became my new favourite drink).&lt;/p&gt;
&lt;p&gt;When the summer started it was an abrupt transition from school-life to work-life. I eventually stopped using the Freeletics workout app and instead went for hour-plus bike rides a couple of times a week, using Strava to track my distance. I miss the disciplined workout. I want to get back into the routine again when my joints aren’t complaining.&lt;/p&gt;
&lt;p&gt;Having a second (but small) source of income as well as keeping my other skills sharp was prominent this past year. I performed some freelance logo design and web development with a great friend of mine. At the moment I feel like the work I’m doing is valued at less than what it should be. I plan on being actionable for future jobs by valuing my time more.&lt;/p&gt;
&lt;p&gt;Here’s some raw metrics that represent part of what I have been up to over the past year:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://whatpulse.org/jonniesweb&quot;&gt;3.1 Million keys typed, 0.7 Million clicks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/jonniesweb&quot;&gt;279 Github contributions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.goodreads.com/review/stats/9250679&quot;&gt;5 books read with 14 on the go&lt;/a&gt; (I should really finish some of these)&lt;/li&gt;
&lt;li&gt;8 blog posts were published and 6 are a work in progress&lt;/li&gt;
&lt;li&gt;569 km of recorded cycling&lt;/li&gt;
&lt;li&gt;57 km of recorded running&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A bad practice that I want to take control over is the number of hours of YouTube videos that I consume per week. I could easily spend that time reading, sleeping or getting things done. I feel that if I visualize my video consumption and set a goal of reducing the hours watched per week I will gain back valuable time.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/10/dominican-pose.jpg&quot; alt=&quot;Posing in the Dominican&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Posing in the Dominican&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Some good practices that I want to continue with this year is getting good sleep, writing every day, practicing mindfulness, taking notes while reading books, being fit and listening to podcasts and conference talks.&lt;/p&gt;
&lt;p&gt;Reading about and using the &lt;a href=&quot;http://gettingthingsdone.com/&quot;&gt;Getting Things Done method&lt;/a&gt; is another goal of mine that I think would help me perform better and achieve more given all my work and personal tasks. Being able to organize better and be disciplined about getting things done will enable me to feel more fulfilled day-to-day.&lt;/p&gt;
&lt;p&gt;Well time to publish this and pack it up for the night as it’s just past midnight. Today my friends and I are attending the Panda Games, an annual football game with a decades-old rivalry between Ottawa’s two big universities: Carleton and Ottawa U. It’s a big party and it’ll be a write-off for all of us.&lt;/p&gt;</content:encoded></item><item><title>Old Habits Die Hard: Copy and Paste</title><link>https://jonsimpson.ca/old-habits-die-hard-copy-and-paste/</link><guid isPermaLink="true">https://jonsimpson.ca/old-habits-die-hard-copy-and-paste/</guid><description>Old Habits Die Hard: Copy and Paste</description><pubDate>Tue, 16 Aug 2016 04:05:33 GMT</pubDate><content:encoded>&lt;p&gt;Copy and paste is bad.&lt;/p&gt;
&lt;p&gt;Every single person who uses a computer learns how to copy and paste.&lt;/p&gt;
&lt;p&gt;Copy and paste is necessary to perform many tasks.&lt;/p&gt;
&lt;p&gt;Old habits die hard.&lt;/p&gt;
&lt;p&gt;Email and Word documents and illegally downloaded movies all expect you to use copy and paste because it’s how you’re supposed to do things: copy this email into that folder, move that paragraph of text into the next chapter, copy those illegally downloaded movies to the external hard drive for safekeeping.&lt;/p&gt;
&lt;p&gt;There’s a time and place for copy and paste, but why resort to it when you do the same task multiple times every day? It’s passable when the situation can’t be made better, or can it?&lt;/p&gt;
&lt;p&gt;Sounds like old habits die hard.&lt;/p&gt;
&lt;p&gt;Sure, copy and paste is quick when you’re good at it, but the time adds up. For example, take the process of navigating into a bunch of files and folders to copy the same five files to a different place. Let’s be earnest here and say it takes a minute of this theoretical person’s time. Based on the work they do, they repeat the same copy and paste job ten more times that day. This time adds up.&lt;/p&gt;
&lt;p&gt;Yes, old habits die hard.&lt;/p&gt;
&lt;p&gt;Humans are excellent at copy and paste. Guess what else is excellent at copy and paste as well? Computers!!! Computers are better than humans in every way possible when it comes to performing repetitive copy and paste tasks. Speed. Accuracy. Longevity. It’s a combination which doesn’t disappoint.&lt;/p&gt;
&lt;p&gt;Luckily where my soliloquy is headed involves people who program computers for a profession: Programmers. Programmers write &lt;em&gt;programs&lt;/em&gt; to make computers do things for humans. Copy and paste is one of them. So why are Programmers still using copy and paste to do things themselves repetitively instead of &lt;em&gt;programming&lt;/em&gt; a computer to do it for them?&lt;/p&gt;
&lt;p&gt;Old habits die hard…&lt;/p&gt;
&lt;p&gt;Let this sink in for a moment…&lt;/p&gt;
&lt;p&gt;Programmers are proficient in telling the computer what to do, namely copy and paste. But they’re still using copy and paste things themselves because they’re really good at it. It’s been a habit since they started using a computer however many decades ago.&lt;/p&gt;
&lt;p&gt;This shocks me, especially in the sense where programmers are paid very well to program computers, but instead they’re spending a chunk of their time performing repetitive copy and paste tasks, not to mention they’re fully qualified to program the computer to do it for them.&lt;/p&gt;
&lt;p&gt;It’s a bad habit of programmers to repetitively copy and paste. Knowing so and continuing to do must involve masochism. Be a better programmer and get the computer to copy and paste for you!&lt;/p&gt;</content:encoded></item><item><title>Implementing Agile Databases with Liquibase</title><link>https://jonsimpson.ca/implementing-agile-databases-with-liquibase/</link><guid isPermaLink="true">https://jonsimpson.ca/implementing-agile-databases-with-liquibase/</guid><description>Implementing Agile Databases with Liquibase</description><pubDate>Mon, 25 Apr 2016 03:09:21 GMT</pubDate><content:encoded>&lt;p&gt;We have an inconvenient problem. Our development databases are all snowflakes – snowflakes in the sense that each developer’s database has been hand updated and maintained at the leisure of the developer so that no two databases are alike.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/03/database-scripts-directory.png&quot; alt=&quot;database-scripts-directory&quot;&gt;We version our database changes into scripts with the creation date included in the name. But that’s where the database script organization and automation ends. There’s nothing to take those scripts and apply it to a local developer’s database. Just plain old copy and pasting to run new scripts. Adding to the pain is that the database scripts don’t go back to day 1 of the database. Instead, the development databases are passed around and copied whenever someone breaks their database and needs a new one or a new employee comes on board and needs to set up their development environment.&lt;/p&gt;
&lt;p&gt;Manually updating our personal development database is problematic. Forgetting to run scripts can result in unknown side effects. Usually we don’t bother updating our database with the latest scripts until we really have to. That happens whenever we launch our app. Once the app starts complaining about missing tables or fields we’re on the hunt searching for the one script out of hundreds that would fix the problem.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/04/liquibase_logo.gif&quot; alt=&quot;liquibase_logo&quot;&gt;As you can see, it is a system that is wasting the productivity of all developers, not to mention the frustration that happens when catching up after being behind for a long time. For a while now we’ve acknowledged that it’s a problem and should be fixed. A few of us looked into the problem and talked about using FlywayDB or Liquibase, but Liquibase seemed to be the best choice for us since it is more feature complete. Since that discussion one of our team members started experimenting with Liquibase and pushed that code to a branch, but it’s remained dormant for a while. I wouldn’t say integrating Liquibase into our development environment was abandoned because it was tough to do, rather I’m realizing that it is a common trend for developer tooling and continuous improvement to make way for feature development, bug fixing and level 3 support. Maybe our development team is just too small and busy to tackle these extra tasks or our bosses don’t realize the productivity sinkholes as significant and don’t allocate any time for improving it. I would like to spur some discussion around this.&lt;/p&gt;
&lt;p&gt;Anyways, on with the rest of the post.&lt;/p&gt;
&lt;h2 id=&quot;look-the-proof-of-concept-is-working&quot;&gt;Look! The Proof of Concept is Working!&lt;/h2&gt;
&lt;p&gt;I spent the greater part of my Good Friday working on getting Liquibase working with our app. Partway through the day I got the production database schema into the Liquibase xml format and checked into source control. A few more hours were put into fixing minor SQL vs. MySQL issues with Liquibase’s import. (Who knew the BIT(1) type could have an auto increment? Liquibase disagrees).&lt;/p&gt;
&lt;p&gt;Some time was spent creating a script at &lt;em&gt;script/db&lt;/em&gt; (&lt;a href=&quot;https://github.com/github/scripts-to-rule-them-all&quot;&gt;in the style of GitHub&lt;/a&gt;) for bootstrapping the Liquibase command with the developer database configuration.&lt;/p&gt;
&lt;p&gt;Next I’ll mention some of the incompatibilities that I ran into while generating a Liquibase change log from the existing production database.&lt;/p&gt;
&lt;h3 id=&quot;generating-a-change-log-from-an-existing-database&quot;&gt;Generating a Change Log From an Existing Database&lt;/h3&gt;
&lt;p&gt;Liquibase offers a very helpful feature: being able to take an existing database schema and turn it into an xml change log that it can work with. The Liquibase website &lt;a href=&quot;http://www.liquibase.org/documentation/existing_project.html&quot;&gt;has documentation on the topic&lt;/a&gt;, but it doesn’t mention the slight incompatibilities that you may run into, particularly with data types.&lt;/p&gt;
&lt;p&gt;Once the production database schema was converted into a liquibase change log, I pointed Liquibase to a fresh MySQL server running locally. Running the &lt;em&gt;validate&lt;/em&gt; and &lt;em&gt;update&lt;/em&gt; commands on the change log resulted in some SQL errors when executing. All of them were related to data type conversions. These problems were fixed by modifying the change log xml file manually.&lt;/p&gt;
&lt;p&gt;The first issue was that the &lt;em&gt;NOW()&lt;/em&gt; function wasn’t being recognized. Simple enough, just replace it with &lt;em&gt;CURRENT_TIMESTAMP&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Next was Liquibase turning all of the &lt;em&gt;timestamp&lt;/em&gt; data types into &lt;em&gt;TIMESTAMP(19)&lt;/em&gt;. Doing a search and replace for &lt;em&gt;TIMESTAMP(19)&lt;/em&gt; to &lt;em&gt;TIMESTAMP&lt;/em&gt; did the trick.&lt;/p&gt;
&lt;p&gt;The same issue as above happened to all datetime data types. Doing a search and replace for &lt;em&gt;datetime(6)&lt;/em&gt; to &lt;em&gt;datetime&lt;/em&gt; worked as expected.&lt;/p&gt;
&lt;p&gt;In the production database one table had a primary key with the data type of TINYINT(1). When Liquibase read this it converted the data type to BIT. It’s a &lt;a href=&quot;https://liquibase.jira.com/browse/CORE-1260&quot;&gt;known issue&lt;/a&gt; at the moment, but the fix is simple: change the type in the change log to some other data type like &lt;em&gt;TINYINT&lt;/em&gt; (or &lt;em&gt;TINYINT UNSIGNED&lt;/em&gt;). Make sure if this is a primary key that you update the foreign keys in the other tables, otherwise you’ll get errors when the foreign keys get applied.&lt;/p&gt;
&lt;p&gt;This one was the weirdest. In the production database an index existed on a column of type &lt;em&gt;mediumtext&lt;/em&gt; with no explicit length. The index was defined as a &lt;em&gt;FULLTEXT&lt;/em&gt;. When Liquibase would create the database, it would fail when creating this index. &lt;a href=&quot;http://www.sitepoint.com/forums/showthread.php?474666-Creating-an-index-on-mediumtext-field-error&quot;&gt;After some googling&lt;/a&gt; it appears that the &lt;em&gt;FULLTEXT&lt;/em&gt; index requires a length when operating on &lt;em&gt;mediumtext&lt;/em&gt;. In the end, adding a &lt;em&gt;(255)&lt;/em&gt; or however long to your &lt;em&gt;FULLTEXT&lt;/em&gt; index data type fixes it.&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/7a52e2cdb712cd3c7af2.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;Lastly, the tables from the production database were set to use the UTF-8 encoding and the InnoDB engine, but Liquibase doesn’t pick this up. The workaround for this was to append the following to every table definition in the Liquibase change set xml:&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/e1e14d4e230987e1ef84.js&quot;&gt;&lt;/script&gt;
&lt;h3 id=&quot;next-steps&quot;&gt;Next Steps&lt;/h3&gt;
&lt;p&gt;Because we provide a multitenancy SaaS offering, we drive a lot of behaviour of our app from the database. Whether it’s per customer feature toggles, a list of selectable fields, or email templates, a lot of data needs to be prepopulated in the database for the app to fully function.&lt;/p&gt;
&lt;p&gt;The next bit of work involved with moving towards an agile database is to find all of the tables that contain data which are needed for the app to function. Liquibase offers methods of loading this data into the database by either &lt;a href=&quot;http://www.liquibase.org/documentation/changes/load_data.html&quot;&gt;loading data from a CSV file&lt;/a&gt; or by &lt;a href=&quot;http://www.liquibase.org/documentation/changes/insert.html&quot;&gt;specifying the data in a change log&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Another important part of the database that needs to be checked in with Liquibase is the triggers and procedures. Liquibase doesn’t automatically extract the triggers and procedures so you’ll have to locate and export them manually.&lt;/p&gt;
&lt;p&gt;Additionally, improving the developer experience by simplifying the number of things they have to do and know eases adoption and can make them more productive. Things like the configuration needed to run Liquibase, creating a new change log from a template and documentation of usage and best practices are all things that can bring a developer up to speed and make their life easier.&lt;/p&gt;
&lt;p&gt;Lastly, there exists a &lt;a href=&quot;https://github.com/liquibase/liquibase-gradle-plugin&quot;&gt;Liquibase plugin for the Gradle build tool&lt;/a&gt; which makes it straightforward to orchestrate Liquibase with your Gradle tasks. This would come in handy when Gradle is used to perform integration and any other form of automated testing in an environment which uses the database. Test data could be loaded in and cleaned up based on the type of testing.&lt;/p&gt;
&lt;h3 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/03/automate-all-the-things.jpg&quot; alt=&quot;automate-all-the-things&quot;&gt;No developer likes to perform repetitive tasks, therefore minimize the pain by automating all the things. Developer tooling can be often overlooked. As a developer do yourself and your colleagues a favour and automate the tedious tasks into oblivion. As a manager, realize the inefficiencies and prioritize fixing it. Attack the tasks that take the most time or would provide the most value if automated, then just start picking at it piece by piece.&lt;/p&gt;
&lt;p&gt;Liquibase was discussed and acknowledged as the solution to our developer database woes. Following through with integrating Liquibase into our developer environment and going a few steps further with making it easy to use leads to more time saved for actual work. Delaying the implementation of the solution results in losing out on the productivity gains that you’re well aware of. Any productivity increase is better for both the developer’s productivity, the developer’s happiness and the business as a whole.&lt;/p&gt;</content:encoded></item><item><title>My Top Tech, Software and Comedy Podcast List</title><link>https://jonsimpson.ca/my-top-tech-software-and-comedy-podcast-list/</link><guid isPermaLink="true">https://jonsimpson.ca/my-top-tech-software-and-comedy-podcast-list/</guid><description>My Top Tech, Software and Comedy Podcast List</description><pubDate>Tue, 22 Mar 2016 17:21:22 GMT</pubDate><content:encoded>&lt;p&gt;Podcasts are an excellent source of entertainment and learning new things. I find that when I’m doing a mindless task like working out or commuting I can actively focus on something more interesting. Being a student at the moment, I have a lot of time going to and from classes, making food, and procrastinating. I fill up as much of that time listening to podcasts since I enjoy keeping up with the latest tech news, learning new skills and having a laugh.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;#The-List&quot;&gt;Click here to jump to the list if you can’t wait.&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&quot;my-podcast-listening-history&quot;&gt;My Podcast Listening History&lt;/h2&gt;
&lt;p&gt;I’ve been a huge listener to podcasts shortly before I got my first iPod (iPod nano 3rd generation, 8GB, turquoise) which was sometime during 2006, I think. Back then I started listening to a lot of the podcasts from the &lt;a href=&quot;http://twit.tv&quot;&gt;TWiT&lt;/a&gt; and &lt;a href=&quot;http://revision3.com/&quot;&gt;Revision3&lt;/a&gt; networks. Here I am now, just over 10 years later and I’m still addicted.&lt;/p&gt;
&lt;p&gt;Having had a twice a week paper route gave me a lot of mindless time that I soon took over by listening to podcasts. I prominently remember delivering papers in my neighbourhood during a cold, Canadian winter night listening to an excellent holiday episode of &lt;a href=&quot;http://majornelson.com&quot;&gt;Major Nelson Radio&lt;/a&gt;. I also remember laughing my ass off to &lt;a href=&quot;http://revision3.com/diggnation&quot;&gt;Diggnation&lt;/a&gt;, hosted by Kevin Rose and Alex Albrecht, where they share the greatest posts from Digg that week.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.grc.com/securitynow.htm&quot;&gt;Security Now!&lt;/a&gt; was a momentous podcast for me. I started listening to it around 2006 when they were at episode 60. Ever since then I’ve been a listener. I can’t thank Steve and Leo enough for their excellent discussion on current security issues and in-depth episodes on various technologies, like their how the internet works series and explanation of the Stuxnet virus. Because I’ve been listening to Security Now! for so long I’ve learned so much about security and the web that I’m practically acing the fourth year computer security class at my University. The valuable knowledge learned will stick around with me forever, already a great asset for my professional career.&lt;/p&gt;
&lt;h2 id=&quot;the-list&quot;&gt;The List&lt;/h2&gt;
&lt;p&gt;Here’s my list of favourite podcasts over the years, all categorized by genre.&lt;/p&gt;
&lt;h3 id=&quot;technology&quot;&gt;Technology&lt;/h3&gt;
&lt;p&gt;Keeping up with the latest in tech is a given when you’re heavily immersed in the ecosystem, less being a computer scientist.&lt;/p&gt;
&lt;h4 id=&quot;security-now&quot;&gt;&lt;a href=&quot;https://www.grc.com/securitynow.htm&quot;&gt;Security Now!&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;https://imglogo.podbean.com/dir-logo/24272/24272_300x300.jpg?resize=200%2C200&quot; alt=&quot;Security Now Logo&quot;&gt;&lt;/p&gt;
&lt;p&gt;One of the reasons why I’m studying computer science is because of the Security Now! podcast. Every week, Steve and Leo discuss the most interesting and current topics in security. Whether that’s huge corporate hacking, the latest ransomware, IoT security or even various health topics, it’s a polymath of useful information for anyone who’s interested in security.&lt;/p&gt;
&lt;h4 id=&quot;this-week-in-tech&quot;&gt;&lt;a href=&quot;https://twit.tv/shows/this-week-in-tech&quot;&gt;This Week in Tech&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;https://www.techtvforever.net/external/TWiT_Logo.jpg?resize=200%2C200&quot; alt=&quot;TWiT Logo&quot;&gt;The best source for tech news, This Week in Tech is hosted by Leo Laporte, a hero of mine for creating the TWiT network, and continually educating me. I wouldn’t know half of what I know now if it wasn’t for Leo’s work. Each week the latest and greatest tech news is dissected with a representative panel of tech journalists. It’s very informative to hear experts in the area give their opinion.&lt;/p&gt;
&lt;p&gt;I remember when Twitter was getting big, it was all that TWiT would talk about for weeks on end. It was even expected: “What Twitter news do we have this week?” was saild by Leo almost every episode when Twitter was growing. Those were the days, when Leo was the #1 user on Twitter. Then the masses came and it went to shit. Okay, I still love Twitter. Rant over.&lt;/p&gt;
&lt;h4 id=&quot;hak5&quot;&gt;&lt;a href=&quot;https://hak5.org/&quot;&gt;Hak5&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;https://www.dafthack.com/_/rsrc/1358392421163/links2/logo-hak5.PNG?resize=199%2C92&quot; alt=&quot;Hak5 Logo&quot;&gt;Basically a technology hacker/DIY/hardware/software show with a lot of original content. I really got interested in Linux and hacking because of it. Just recently I saw that they’re working on quadrocopters. So cool! One of my favourite segments was the usb multibooting using grub. No need to burn multiple CDs for all of your live-boot isos and images, just put them all on one USB stick and give it a shiny menu to choose which one to use. The show has a kick-ass soundtrack and it looks like they’ve expanded to a new studio and are now producing multiple shows. These guys have grown a lot!&lt;/p&gt;
&lt;h4 id=&quot;maximum-pc-no-bs-podcast&quot;&gt;&lt;a href=&quot;http://www.maximumpc.com/podcast/&quot;&gt;Maximum PC No BS Podcast&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/03/nobspodcast.jpg&quot; alt=&quot;&quot;&gt;Almost forgotten, I remembered this one as I was building the Runners Up section. The Maximum PC No BS Podcast could also go under the Comedy section, but Technology suits it better. On that same paper route I had when I was young I listened to this podcast religiously as soon as each episode came out. Gordon Mah Ung and Will Smith were a perfect pair when talking shop about computer hardware, tech news and building computers for the Maximum PC magazine.&lt;/p&gt;
&lt;p&gt;Besides being overly frustrated about certain things, Gordon had a segment called Gordon’s Rant of the Week where he would vent about anything and everything from Star Wars to breaking motherboards to shitty software. Every new year there’s usually a best of Gordon’s rants episode, which is a must listen if you find Gordon’s rants funny.&lt;/p&gt;
&lt;h3 id=&quot;comedy&quot;&gt;Comedy&lt;/h3&gt;
&lt;p&gt;These podcasts are timeless. You can go back and listen to all of them like I’ve done.&lt;/p&gt;
&lt;h4 id=&quot;rooster-teeth-podcast&quot;&gt;&lt;a href=&quot;http://roosterteeth.com/show/rt-podcast&quot;&gt;Rooster Teeth Podcast&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;https://s3.roosterteeth.com/podcasts/rtpodcast.jpg?resize=200%2C200&quot; alt=&quot;&quot;&gt;One of the funniest podcasts, various members of the Rooster Teeth company talk about ridiculous stories, gaming, current news and Science. They really don’t know much about Science, but the cast always tries to argue it out until someone say’s something so illogical, the cast and crew burst out into laughter. Moments like these are animated into short videos and posted to their YouTube channel as &lt;a href=&quot;http://roosterteeth.com/show/rt-animated-adventures&quot;&gt;Rooster Teeth Animated Adventures&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It was the summer of 2013, between my first and second years of university, I was living back home trying to find work. I landed this landscaping job, being paid directly by the owners of a large Caledon estate, to fix up their property. That landscaping was fun but hard work. I discovered the Rooster Teeth Podcast early into the summer. Each day, I would be listening to maybe five or six episodes in an eight hour day. I blew through the backlog of episodes really fast and ended up listening to them all before the end of the summer.&lt;/p&gt;
&lt;h4 id=&quot;diggnation&quot;&gt;&lt;a href=&quot;http://revision3.com/diggnation&quot;&gt;Diggnation&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;https://seeklogo.com/images/D/Diggnation-logo-593616AB29-seeklogo.com.gif?resize=200%2C200&quot; alt=&quot;&quot;&gt;What’s the latest crap from Digg, you might ask? Kevin Rose and Alex Albrecht answered this tough question for 340 episodes from 2005 to 2012. The two would discuss the most interesting news bits on whichever sofa they landed on. Often very entertaining, Diggnation had me dying of laughter.&lt;/p&gt;
&lt;h3 id=&quot;software-development&quot;&gt;Software Development&lt;/h3&gt;
&lt;p&gt;Most, if not all of these Software Development podcasts are timeless. A lot of the topics discussed are still useful today. The only real difference is the adoption of the tools and methodologies. I usually look through the list of earlier episodes and listen to the ones that catch my eye. Once you’re hooked on a podcast, it’s not hard to find yourself downloading and listening to everything they have available.&lt;/p&gt;
&lt;h4 id=&quot;the-ship-show&quot;&gt;&lt;a href=&quot;http://theshipshow.com/&quot;&gt;The Ship Show&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;https://theshipshow.com/content/tss-logo.jpg?resize=200%2C200&quot; alt=&quot;&quot;&gt;Sadly, this podcast just announced they’re ending the show a few days ago, so I’m still in my mourning stage at the moment, but The Ship Show has been a fun and informative source of everything release engineering, DevOps, and build engineering in big and small companies. The panel discusses new tools, methods and philosophies for improving parts of your tech company, often from firsthand experience. What makes this podcast special is that they delve into more of the technical and implementation details, which is great if you’re into that.&lt;/p&gt;
&lt;h4 id=&quot;arrested-devops&quot;&gt;&lt;a href=&quot;http://arresteddevops.com&quot;&gt;Arrested DevOps&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;https://www.arresteddevops.com//episode/img/2015-in-review.png?resize=200%2C200&amp;#x26;ssl=1&quot; alt=&quot;&quot;&gt;The ADO podcast is made for people who don’t exactly know what this whole DevOps thing is about but would like to know. Matt Straton, the creator of the podcast makes this point often as he has learned DevOps from scratch. Each episode goes into depth on a DevOps related subject, often having guests from the industry who are knowledgeable in the topic to add more value to the discussion. A lot of the topics discussed are higher level than what is offered in The Ship Show, but Arrested DevOps is still as valuable since its important to understand the big picture and ask the big questions. Both Arrested DevOps and The Ship Show are complimentary to each other.&lt;/p&gt;
&lt;h4 id=&quot;software-engineering-radio&quot;&gt;&lt;a href=&quot;http://www.se-radio.net/&quot;&gt;Software Engineering Radio&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;https://www.se-radio.net/wp-content/uploads/2014/10/SERadio-1300x1370.jpg?resize=200%2C211&quot; alt=&quot;&quot;&gt;Sponsored by the IEEE, this podcast offers excellent interviews on a variety of Software Engineering topics. The episodes mainly consist of two or three people discussing a specific topic, whether it’s a technology or methodology. The time is taken to give listeners a good idea about the purpose and it’s usefulness. The interviewer often does their homework before performing the interview and therefore asks well thought out questions. Because the episodes cover such a wide breadth of topics, surfing through the past episodes is a must!&lt;/p&gt;
&lt;h4 id=&quot;software-engineering-daily&quot;&gt;&lt;a href=&quot;https://softwareengineeringdaily.com/&quot;&gt;Software Engineering Daily&lt;/a&gt;&lt;/h4&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/03/se-daily-logo.jpg&quot; alt=&quot;&quot;&gt;If Software Engineering Radio wasn’t good enough, Software Engineering Daily applies the same format and content to a daily schedule. The amazing producer and interviewer Jeff, also a current host on Software Engineering Radio, has amassed hundreds of episodes covering everything from technology, to business, to soft skills – all pertinent to any software engineer. Dozens of hours of content can be queued up for listening just by skimming through the history of episodes.&lt;/p&gt;
&lt;p&gt;I wrote a post on &lt;a href=&quot;https://jonsimpson.ca/market-yourself/&quot;&gt;marketing yourself&lt;/a&gt; from &lt;a href=&quot;http://www.se-radio.net/2015/12/se-radio-episode-245-john-sonmez-on-marketing-yourself-and-managing-your-career/&quot;&gt;episode 243 with John Sonmez&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&quot;runners-up&quot;&gt;Runners up&lt;/h2&gt;
&lt;p&gt;Here’s a few podcasts that I’ve listened to for a long time, but didn’t make the list:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Floss Weekly&lt;/strong&gt; – Randal Schwartz and other hosts interview open source software projects to share what the project is about. Generally pretty interesting, it’s cool to hear what people are doing in subjects that you’re usually not interested in or didn’t know existed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tekzilla&lt;/strong&gt; – Great segments! Veronica Belmont and Patrick Norton were a killer team and shared great tips and tricks to do with technology.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Windows Weekly&lt;/strong&gt; – Paul Thurott had the perfect level of satire as he talked about Windows products that no one uses, like Windows Home Server (I used it, so I can bash it), and things that people use, like Xbox and new Windows operating systems.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This Week in Google&lt;/strong&gt; – Gina Trapani and Jeff Jarvais have excellent discussions about the cloud and everything Google.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mahalo Daily&lt;/strong&gt; – Veronica Belmont was the best in this short daily podcast format!&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Update 2017-01-19: Added Software Engineering Daily&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title>Market Yourself</title><link>https://jonsimpson.ca/market-yourself/</link><guid isPermaLink="true">https://jonsimpson.ca/market-yourself/</guid><description>Market Yourself</description><pubDate>Mon, 25 Jan 2016 19:23:55 GMT</pubDate><content:encoded>&lt;p&gt;Want to score your dream job and go work for Google, Netflix, Amazon, Github, and the like? The big question: “How do I do it?”&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Market yourself, of course!&lt;/strong&gt;&lt;/p&gt;
&lt;h3 id=&quot;what-do-these-companies-want&quot;&gt;What do these companies want?&lt;/h3&gt;
&lt;p&gt;These companies only hire the best of the best. But what does that really mean? You may see some of the following traits on job postings or you may pick them up from the people who already work at these companies:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Continuous improvement&lt;/li&gt;
&lt;li&gt;Feedback/data driven&lt;/li&gt;
&lt;li&gt;Solve the toughest/biggest problems&lt;/li&gt;
&lt;li&gt;Extremely collaborative&lt;/li&gt;
&lt;li&gt;Intuitive&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;“I’ve got all of those traits! I should have no problem getting hired”.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Sure, you may be qualified, but how do you stand out from the person beside you who’s as qualified as you are? There are plenty of methods that can make you stand out in the minds of recruiters, hiring managers, and even high level employees of the company.&lt;/p&gt;
&lt;h3 id=&quot;market-yourself&quot;&gt;Market Yourself&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;Marketing is the action of promoting and selling products or services&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Marketing yourself is therefore promoting and selling your skills and providing value to a company.&lt;/p&gt;
&lt;p&gt;Standing out from the crowd involves work! I listened to an excellent podcast from Software Engineering Radio titled &lt;a href=&quot;http://www.se-radio.net/2015/12/se-radio-episode-245-john-sonmez-on-marketing-yourself-and-managing-your-career/&quot;&gt;SE Radio Episode 245: John Sonmez on Marketing Yourself and Managing Your Career&lt;/a&gt;. John provided some helpful methods on standing out from the crowd, getting noticed by the people who can help you get your dream job, and useful tips on managing your career to make it the best it can be.&lt;/p&gt;
&lt;p&gt;A few of the methods mentioned in the podcast are listed below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Get a blog and write a post every week&lt;/li&gt;
&lt;li&gt;Specializing your career&lt;/li&gt;
&lt;li&gt;Attending/presenting at meetups&lt;/li&gt;
&lt;li&gt;Interact with the blogs/social media of developers at the target companies&lt;/li&gt;
&lt;li&gt;Contribute to open-source projects&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The bottom line is to &lt;strong&gt;be passionate about everything you do&lt;/strong&gt;. When you’re giving off that vibe, others can’t help but notice the energy and detail put into everything: blog posts, conversations, presentations… the list is endless. More people remember an enthusiastic person than someone who’s not interested in what they’re talking about. Being detailed in your work is the bare minimum. Expression is key.&lt;/p&gt;
&lt;h4 id=&quot;blogging&quot;&gt;Blogging&lt;/h4&gt;
&lt;p&gt;Writing is one of the best mediums for sharing information. Blogging has the lowest barrier of entry. There is a huge and rich amount of content out there on anything and everything, especially in tech. Ideas are thrown back and forth, constantly iterated, all the while being kept around for others to read years later. One person’s blog post today is another person’s learning opportunity later. Read &lt;a href=&quot;https://sites.google.com/site/steveyegge2/you-should-write-blogs&quot;&gt;Why You Should Write Blogs by Steve Yegge&lt;/a&gt; for a deeper understanding of this powerful medium.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://wordpress.com&quot;&gt;WordPress.com&lt;/a&gt; or &lt;a href=&quot;https://blogger.com&quot;&gt;Blogger&lt;/a&gt; are great services to get going. If you’re more technical, you can use &lt;a href=&quot;https://pages.github.com&quot;&gt;GitHub Pages&lt;/a&gt; or even host your own. At the end of the day it doesn’t matter where it’s hosted or that you have a fancy domain name; all that matters is the content and what the readers get out of it.&lt;/p&gt;
&lt;p&gt;Your blog can be about anything. I like writing about topics surrounding software development, both involving hard and soft skills. Having a prospective company see you go into depth on various topics gives them more confidence that you’re the right person for the job. Blogging also hones in your understanding of the topic you’re writing about and, over time, improves your communication skills.&lt;/p&gt;
&lt;p&gt;There’s only so many people you can talk to. Writing allows you and your ideas to spread without limits.&lt;/p&gt;
&lt;h4 id=&quot;specializing&quot;&gt;Specializing&lt;/h4&gt;
&lt;p&gt;It’s best to be a specialist. Who’s going to receive a larger salary and gain more attention: a Fault-tolerant Docker Microservices Engineer or a Software Developer? Definitely the Fault-tolerant Docker Microservices Engineer.&lt;/p&gt;
&lt;p&gt;Lets say I write a lot about fault-tolerant Docker microservices. To visitors of my website they can see that I know a lot about fault-tolerant Docker microservices. From now on they’ll associate me with being very informative and skilled in things fault-tolerant Docker microservices. Once the viewers have an understanding that I’m skilled in fault-tolerant Docker microservices I can expand to writing a lot about Docker microservices in general. I can then be known by others as the guy who knows a lot about Docker microservices. As time goes on and I become more popular I can expand to all of Docker and be known as the Docker guy.&lt;/p&gt;
&lt;p&gt;Start small and specialize in a certain area. Its easier to stand out that way since you’re up against less and less people. Once people notice that you’re a specialist for subject X, the association will stick. From then on you can use this to your advantage. Say the company you’re wanting to work at is hiring for the job you’re specializing in. If people from the company know that you’re skilled in this subject they can recommend you to the hiring managers. You can use this tactic with some of the others listed here to get recommended by the employees of the company.&lt;/p&gt;
&lt;h4 id=&quot;attending-meetups&quot;&gt;Attending Meetups&lt;/h4&gt;
&lt;p&gt;Going to a meetup and striking up a conversation with other attendees leaves a (hopefully) positive view of you on them. This is pretty much networking. In the podcast, John Sonmez stated that it is important to have a conversation with the individual without bugging them to help you get a job when you first meet. Meeting these people months before actually asking for their help to get a job improves your chances greatly since you’ve already met the person. It’s even better if you offer to help them. Then they’re more than likely to help you out and do whatever to pay you back.&lt;/p&gt;
&lt;p&gt;Performing a talk at a meetup or conference is a great way to get noticed. The meetup or conference does a lot of the marketing for you already! Your name and title of the talk is advertised both at the venue and online. After the event a lot of references to you and the talk are put on the event’s website, YouTube recordings of your talk and news blogs. The viewers are trying to get as much useful information out of your talk. You have to prove that you have useful information. You’ll want to hook your viewers into wanting more information. At the beginning and end of the presentation you can mention your contact information (email, Twitter, GitHub, website..) and even offer a link where the viewer can go to get more information.&lt;/p&gt;
&lt;p&gt;In the presentation, including links to extra materials can send those viewers to your blog, GitHub, or wherever else. Mentioning that you do consulting or are looking for an employer (only if you don’t have one!) puts the word out there since it can’t hurt.&lt;/p&gt;
&lt;h4 id=&quot;other-blogs-and-social-media&quot;&gt;Other Blogs and Social Media&lt;/h4&gt;
&lt;p&gt;Following and having useful conversations on the blogs of employees at the company you’re trying to get into, as well as the blogs of others in the same tech community gets you noticed over time. When a company is in the process of hiring, the employees can vouch for your skills, immediately putting you ahead of other candidates. If things go really well, the company can straight out offer you a position, bypassing the interview, giving you the upper hand at negotiating your salary, benefits and anything else.&lt;/p&gt;
&lt;h4 id=&quot;contribute-to-open-source&quot;&gt;Contribute to Open Source&lt;/h4&gt;
&lt;p&gt;Working on open source projects is one of the new contenders of things that tech companies are looking for in a developer. &lt;a href=&quot;http://www.cnet.com/news/forget-linkedin-companies-turn-to-github-to-find-tech-talent/&quot;&gt;Its been proven by some companies already that they hire based on your GitHub profile&lt;/a&gt;. Contributing to an open source project or two shows that you’re capable of remotely working and good at collaborating with others. It also allows the hiring managers and recruiters to look at the code you write (eg. code style, design), something not shown on a resumé.&lt;/p&gt;
&lt;p&gt;Even better, contributing to an open source project belonging to the company that you’re trying to get a job with looks very impressive when they start looking for candidates. If they can see all the contributions that you’ve made to their open source projects, it’ll look amazing since your skills have already shown business value. The company can more easily see the value you would bring to the software that they develop.&lt;/p&gt;
&lt;p&gt;Working on your own side projects is always beneficial. It shows that you are creative and intuitive at solving your own problems and have the drive to make things better where you see fit.&lt;/p&gt;
&lt;h3 id=&quot;hacking-the-interview&quot;&gt;Hacking the Interview&lt;/h3&gt;
&lt;p&gt;A few tricks were mentioned around hacking the interview. One of them mentioned earlier, is for the company to want to hire you right off the spot, bypassing the interview entirely. This could mean that you’ve received a recommendation from someone or your personal marketing has worked! This situation is pretty hard to get so the next tip is one we can all use to positively affect our interview outcome.&lt;/p&gt;
&lt;p&gt;Another smart trick involves requesting “just five minutes to chat” sometime before the interview. Either over the phone or in person, this conversation allows you to talk freely with the interviewer to find out more about them and the company, including talking about yourself. Once your interview time comes along, the edge is already off since you’ve talked with the hiring manager already. Humans can be biased when it comes to mostly objective tasks such as finding the best candidate for a job. Once you get the interviewer to like you, they work in your favour, often weighing the candidates they like over others.&lt;/p&gt;
&lt;h3 id=&quot;the-book&quot;&gt;The Book&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2016/01/soft-skills-the-software-developers-life-manual-239x300.jpg&quot; alt=&quot;soft-skills-the-software-developers-life-manual&quot;&gt;Mentioned in the podcast, John Sonmez has published a book titled &lt;a href=&quot;http://www.amazon.com/Soft-Skills-software-developers-manual/dp/1617292397/&quot;&gt;Soft Skills: The software developer’s life manual&lt;/a&gt; which sounds like a must read. He raises the point that the chapters are short, allowing for ~5 minute start-to-finish reads. There are 71 chapters in total, organized into seven sections. The section list is as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Career&lt;/li&gt;
&lt;li&gt;Marketing Yourself&lt;/li&gt;
&lt;li&gt;Learning&lt;/li&gt;
&lt;li&gt;Productivity&lt;/li&gt;
&lt;li&gt;Financial&lt;/li&gt;
&lt;li&gt;Fitness&lt;/li&gt;
&lt;li&gt;Spirit&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I haven’t read the book yet, but I definitely will soon. I’m happy that I’m on the right track so far for my personal marketing based on the sections of this book. Besides the book sounding very helpful, you can’t knock the five stars and 178 reviews on Amazon.com!&lt;/p&gt;
&lt;h3 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;It’s not an effortless path. Putting in the time and effort over months or years to market yourself gives big returns in the form of giving you more potential, allowing you to make the most of your career.&lt;/p&gt;
&lt;p&gt;Take charge of your career. Follow your aspirations and make it count! I certainly will.&lt;/p&gt;</content:encoded></item><item><title>Hook, Line and Sinker: How My Early Interests Got Me to Where I Am Today</title><link>https://jonsimpson.ca/hook-line-and-sinker-how-my-early-interests-got-me-to-where-i-am-today/</link><guid isPermaLink="true">https://jonsimpson.ca/hook-line-and-sinker-how-my-early-interests-got-me-to-where-i-am-today/</guid><description>Hook, Line and Sinker: How My Early Interests Got Me to Where I Am Today</description><pubDate>Fri, 15 Jan 2016 20:30:23 GMT</pubDate><content:encoded>&lt;p&gt;Having an exciting software job at a newly-acquired company is opening up so many possibilities, and making possible the projects that I want to accomplish to make things better. Whether its big projects like containerizing our multiple apps for scalability, implementing Continuous Delivery to ship our software faster, or smaller projects like versioning our dependencies for traceability and ease of deployment, or updating to the latest Java version for performance improvements and to use all the new libraries; it’s nonstop fun that upgrades my problem solving skills, improves the lives of our team and customers, and gives me a track record of making positive change.&lt;/p&gt;
&lt;p&gt;After finishing school I can focus more on teaching myself new skills and technologies that I can use and apply during my professional career. Currently I listen to DevOps and software podcasts when I’m traveling to places and I read a few articles about Docker and other technologies when I have free time. My next logical step is to start applying the knowledge I’ve gained both at work and as side projects.&lt;/p&gt;
&lt;p&gt;At least it’s not all bad in the academia life: this fourth year Computer Security class that I’m taking is immensely fun. I’m glad that I have a captivating class this semester. I can give credit to the Security Now podcast for which I’ve listened to around 500 of its 543 episodes as of this writing (read: 10 years of listening!), for giving me the practical knowledge of current security practices and news, diving deep into the details where necessary.&lt;/p&gt;
&lt;p&gt;Dr Anil Somayaji, the professor of the Computer Security course, is an excellent lecturer and a hacker at his roots. His interactive teaching style makes the possibly dry subject of security interesting (if you think of security as dry. Who would?), and his course work is very useful in that it promotes self teaching and helping out others. Each week every student must submit a hacking journal. It consists of the student writing about their adventures, frustrations, successes and failures of hacking on security related things – whether that involves using Metasploit to break into a computer, putting a backdoor into OpenSSH, figuring out how to configure a firewall, etc. The list goes on and on. An online chatroom is used to share resources and chat with other members of the class to figure out hacking problems and interact with the professor. (Other classes should definitely start using this)&lt;/p&gt;
&lt;p&gt;I’m glad to have had the drive to explore and learn when I was young. Throughout my childhood I would spend my time hacking gaming consoles, jailbreaking iPods, experimenting with Linux, and most of all having a website! Not this website, there was a website before jonsimpson.ca. It was jonniesweb.com. I prominently remember creating logos for my website in MS Paint, printing them out and putting it up beside the Canadian flag that was posted in my fifth grade class. I would use MS Front Page 97 to add jokes, pictures, cheat codes, YouTube and Google Video links, games, and referrals to other friend’s Piczo sites. I remember going through a few designs: spaced themed, blue themed, red themed… I even got interested in PHP and used a sweet template. Each iteration improving with content and coding skills.&lt;/p&gt;
&lt;p&gt;Then middle school and high school caught up with me and I stopped updating the website. Sooner or later my dad stopped supporting my hobby, eventually letting the web hosting expire.&lt;/p&gt;
&lt;p&gt;Fast forward a few years and what was once a childhood interest has turned into an education and career choice. Building a website sparked the fire, pursuing a degree gave me the drive, and doing co-op (soon to be full-time) at work has shown me the many different problems to be solved.&lt;/p&gt;
&lt;p&gt;My plan is to work my ass off in all of my classes, finish up my degree and follow my passions, utilizing my knowledge and expressing my solutions at both my job and in my blog. Ultimately trying to build a successful and happy career.&lt;/p&gt;
&lt;p&gt;At the moment I’m just glad that I don’t have a crappy professor.&lt;/p&gt;</content:encoded></item><item><title>Push-button Deployment of a Docker Compose Project</title><link>https://jonsimpson.ca/push-button-deployment-of-a-docker-compose-project/</link><guid isPermaLink="true">https://jonsimpson.ca/push-button-deployment-of-a-docker-compose-project/</guid><description>Push-button Deployment of a Docker Compose Project</description><pubDate>Sat, 21 Nov 2015 19:31:39 GMT</pubDate><content:encoded>&lt;p&gt;I was recently working on figuring out how to automate the deployment of a simple docker compose project. This is a non mission-critical project that consisted of a redis container and a Docker image of Hubot that we’ve built. Here’s the gist of the &lt;em&gt;docker-compose.yml&lt;/em&gt; file:&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/0c8858cdda3d7ec6b8e9.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;Whenever a new version of the zdirect/zbot image is updated and published to the registry a deploy script can be run. For example, the script used to automatically deploy a new version of a Docker Compose project is shown here:&lt;/p&gt;
&lt;script src=&quot;https://gist.github.com/jonniesweb/bdc235f619daecdcb69a.js&quot;&gt;&lt;/script&gt;
&lt;p&gt;Yup, that’s all. Its that simple. Behind the curtains, this command pulls the latest version of the image. Since the &lt;em&gt;docker-compose.yml&lt;/em&gt; file doesn’t specify a tag it defaults to &lt;em&gt;latest&lt;/em&gt;. The old container is then removed and a new one started. Any data specified in volumes are safe since its mounted on the host and not inside the container. Obviously a more complicated project would have a more involved deployment, but simpler is better!&lt;/p&gt;
&lt;p&gt;Integrating this deployment script into Rundeck, Jenkins or your build tool of choice is a piece of cake and isn’t covered here, but might be in a future post. This automation allows you to bridge the gap between building your code and running it on your servers, aka the last-mile problem of continuous delivery.&lt;/p&gt;</content:encoded></item><item><title>Acquisition, Docker and Rundeck</title><link>https://jonsimpson.ca/acquisition-docker-and-rundeck/</link><guid isPermaLink="true">https://jonsimpson.ca/acquisition-docker-and-rundeck/</guid><description>Acquisition, Docker and Rundeck</description><pubDate>Thu, 19 Nov 2015 18:29:20 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/static/images/2015/11/travelclick-logo-300x76.jpg&quot; alt=&quot;travelclick-logo&quot;&gt;ZDirect, the wonderful company I work for has been &lt;a href=&quot;http://www.travelclick.com/en/news-events/press-releases/travelclick-acquires-zdirect-leading-hospitality-crm&quot;&gt;acquired by TravelClick&lt;/a&gt;, a billion dollar hospitality solutions company.&lt;/p&gt;
&lt;p&gt;First of all: Woohoo! I can’t be more excited to be around at this time to jump-start my career.&lt;/p&gt;
&lt;p&gt;One of the changes to occur as soon as possible is the consolidation of our datacentre into TravelClick’s. One of our devs recently found out about Docker and got interested about its power (Hallelujah! Finally it’s not just me who’s interested). Later I bring up Rundeck, a solution to organizing our ad-hoc and scheduled jobs that will assist in the move to Docker.&lt;/p&gt;
&lt;h2 id=&quot;docker&quot;&gt;Docker&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2015/11/docker-large-300x268.png&quot; alt=&quot;docker-large&quot;&gt;His plan is to Dockerize everything we have running in the datacentre to make it easier for our applications to be run/deployed/tested/you-name-it. The bosses are fine with that and are backing him up. I’m excited since this is a fun project right up my alley.&lt;/p&gt;
&lt;p&gt;Since I’m working my ass off trying to finish my degree, I’m only in one day of the week to wash bottles and offer some Docker expertise. Last Friday I had a good chat with the dev working on the Docker stuff. We chatted about &lt;a href=&quot;http://kubernetes.io/&quot;&gt;Kubernetes&lt;/a&gt;, &lt;a href=&quot;https://www.docker.com/docker-swarm&quot;&gt;Swarm&lt;/a&gt;, load balancing, storage volumes, &lt;a href=&quot;https://docs.docker.com/registry/&quot;&gt;registries&lt;/a&gt;, cron and the state of our datacentre. It was quite productive since we bounced ideas off of each other. He’s always busy, juggling a hundred things at once so I offered to give him a hand setting up a Docker registry.&lt;/p&gt;
&lt;p&gt;By the end of the day I had a secure Docker registry running on one of our servers with Jenkins building an example project (&lt;a href=&quot;http://jonsimpson.ca/introducing-chatops-to-my-workplace-hubot/&quot;&gt;ZBot, our Hubot based chatroom robot&lt;/a&gt;), and pushing the image to the registry after it is built. An almost complete continuous delivery pipeline. What would make this better is a way to easily deploy the newly created Docker image to a host.&lt;/p&gt;
&lt;h2 id=&quot;rundeck&quot;&gt;Rundeck&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2015/11/rundeck-logo-300x44.png&quot; alt=&quot;rundeck-logo&quot;&gt;&lt;a href=&quot;http://rundeck.org/&quot;&gt;Rundeck is a job scheduler and runbook automation tool.&lt;/a&gt; Aka it makes it easy to define and run tasks on any of the servers in your datacentre from a nice web UI.&lt;/p&gt;
&lt;p&gt;Currently, we have a lot of cron jobs across many servers scheduled to run periodically for integration, backup and maintenance. We also ssh into servers to run various commands for support and operations purposes.&lt;/p&gt;
&lt;p&gt;Here’s the next piece to our puzzle. Rundeck can fit into many of our use-cases. A few of them are as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Deployment automation (bridge the gap between Jenkins and the servers)&lt;/li&gt;
&lt;li&gt;Run and monitor scheduled jobs&lt;/li&gt;
&lt;li&gt;Logging and accountability for ad-hoc commands&lt;/li&gt;
&lt;li&gt;Integrate with our chatroom, &lt;a href=&quot;http://jonsimpson.ca/introducing-chatops-to-my-workplace-intro/&quot;&gt;for all the ChatOps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Automate more of production&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As we move towards Dockerizing all of our apps, we have to deal with the issue of what we’re going to do with the cron jobs and scheduled tasks that we run. Since we’re ultimately going to move datacentres it makes the most sense to take the cron jobs and turn them into scheduled jobs in Rundeck. That way we can easily manage them from a centralized view. Need to run this scheduled job on another machine? No problem, change where the job runs. Need to rerun the scheduled job again? Just click a button and the scheduled job runs again.&lt;/p&gt;
&lt;p&gt;The developers wear many hats. Dev and ops are two of them. Because we’re jumping from mindset to mindset it makes sense to save time by automating the tedious while trying not to get in the way of others. Rundeck provides the automation and visibility to achieve this speed.&lt;/p&gt;
&lt;p&gt;With the movement of putting all our apps and services into Docker containers, Rundeck will allow us to manage that change while being able to move fast.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;If you’re interested in joining us on this action packed journey, &lt;a href=&quot;http://www.zdirect.com/careers/&quot;&gt;we’re hiring.&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title>Twenty-one!</title><link>https://jonsimpson.ca/twenty-one/</link><guid isPermaLink="true">https://jonsimpson.ca/twenty-one/</guid><description>Twenty-one!</description><pubDate>Fri, 09 Oct 2015 03:16:06 GMT</pubDate><content:encoded>&lt;p&gt;“Why am I not in the US of A right now?” Good question. That’s what my dad asked me when he congratulated me on my 21st birthday this past October 1st.&lt;/p&gt;
&lt;p&gt;Not being in the US this Thursday evening for academic reasons, I spent Wednesday night with my roommates grabbing a few exotic beers at our local pub. The past week has been heavy with assignments and a midterm already, which is why this post has been delayed – but enough with the excuses!&lt;/p&gt;
&lt;p&gt;Some of the achievements or big changes that have occurred since my last birthday have been the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Visited and vacationed in the USA for the first time&lt;/li&gt;
&lt;li&gt;Moved into a new (and nicer) house with my roommates&lt;/li&gt;
&lt;li&gt;Became a happier person through mindfulness&lt;/li&gt;
&lt;li&gt;Expanded my beer and wine taste&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This past year I’ve also been working on a bunch of small projects, some of them I’ve been actively working on, others I haven’t. I’ve &lt;a href=&quot;http://whatpulse.org/jonniesweb&quot;&gt;pressed 3,139,757 keys and 769,532 mouse clicks&lt;/a&gt;, &lt;a href=&quot;https://github.com/jonniesweb&quot;&gt;made 310 contributions to repos on my GitHub&lt;/a&gt;, &lt;a href=&quot;https://www.goodreads.com/review/list/9250679?shelf=read&quot;&gt;read 5 books&lt;/a&gt;, and &lt;a href=&quot;https://www.goodreads.com/review/list/9250679-jonnie-simpson?shelf=currently-reading&quot;&gt;am still reading 6 books&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I’m more than half-way through my Computer Science program at &lt;a href=&quot;http://carleton.ca&quot;&gt;Carleton&lt;/a&gt;. 3.5 years in and I’ve got less than a year and a half to go. It gets more exciting as the topics I’m studying get more advanced and interesting. I’m looking forward to finishing and starting my full-time career.&lt;/p&gt;
&lt;p&gt;I must have read a countless number of articles; pretty much every day covering topics from Docker, continuous delivery, microservices, distributed systems, coding, and programming languages. I’m amazed at how much I’ve read and how I apply it to my work and experiment.&lt;/p&gt;
&lt;p&gt;This year I really want to make drastic improvements to the operations side of my job at ZDirect, learn Go, Ruby and Coffeescript (in that order), become more mindful, and get into shape (because keyboard arms aren’t sexy).&lt;/p&gt;</content:encoded></item><item><title>Protip-ntp</title><link>https://jonsimpson.ca/protip-ntp/</link><guid isPermaLink="true">https://jonsimpson.ca/protip-ntp/</guid><description>Protip-ntp</description><pubDate>Thu, 28 May 2015 01:17:15 GMT</pubDate><content:encoded>&lt;p&gt;Protip: run NTP on all of your servers so that time doesn’t drift back 7 minutes.&lt;/p&gt;</content:encoded></item><item><title>Introducing ChatOps to my Workplace: Hubot</title><link>https://jonsimpson.ca/introducing-chatops-to-my-workplace-hubot/</link><guid isPermaLink="true">https://jonsimpson.ca/introducing-chatops-to-my-workplace-hubot/</guid><description>Introducing ChatOps to my Workplace: Hubot</description><pubDate>Mon, 25 May 2015 03:28:05 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://octodex.github.com/images/hubot.jpg?resize=335%2C335&amp;#x26;ssl=1&quot; alt=&quot;&quot;&gt;Hubot? You may not have heard of it, but its pretty much the workhorse of ChatOps. &lt;a href=&quot;https://hubot.github.com/&quot;&gt;Hubot&lt;/a&gt; is a scriptable chatroom robot. It can integrate with many chat services and comes with a huge community of &lt;a href=&quot;https://github.com/github/hubot-scripts&quot;&gt;plugins&lt;/a&gt; and &lt;a href=&quot;https://github.com/hubot-scripts&quot;&gt;extensions&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;In the previous post I talk about &lt;a href=&quot;http://jonsimpson.ca/introducing-chatops-to-my-workplace-intro/&quot; title=&quot;Introducing ChatOps to my workplace: Intro&quot;&gt;ChatOps, Slack, and how I plan on introducing it to my workplace&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hubot is an IRC/campfire bot designed to give some character to your team’s channel. It has various commands for inserting photos in your chat, fetching stuff, and, indeed, running pre-configured commands.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;There exists many other chatbots, but Hubot is the most popular. You can get scripts for anything: &lt;a href=&quot;https://github.com/hubot-scripts/hubot-google-images&quot;&gt;showing images&lt;/a&gt;, &lt;a href=&quot;https://github.com/github/hubot-scripts/blob/master/src/scripts/jenkins.coffee&quot;&gt;interacting with Jenkins&lt;/a&gt;, &lt;a href=&quot;https://github.com/hubot-scripts/hubot-shipit&quot;&gt;ship it squirrel&lt;/a&gt;, &lt;a href=&quot;https://github.com/hubot-scripts/hubot-pager-me&quot;&gt;pager-duty&lt;/a&gt;, and &lt;a href=&quot;https://github.com/hubot-scripts/hubot-plusplus&quot;&gt;hubot-plusplus, where the points don’t matter&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;All of the plugins are written in &lt;a href=&quot;http://coffeescript.org/&quot;&gt;coffeescript&lt;/a&gt; and follow a simple input/output &lt;a href=&quot;https://github.com/github/hubot/blob/master/docs/scripting.md&quot;&gt;design&lt;/a&gt; using regular expressions. Persistence is included as well, using &lt;a href=&quot;https://github.com/hubot-scripts/hubot-redis-brain&quot;&gt;Redis&lt;/a&gt; as the datastore.&lt;/p&gt;
&lt;p&gt;Writing Hubot scripts can automate tasks while presenting a simple interface to interact with. Writing custom scripts adds useful insights and actions into your business and software.&lt;/p&gt;
&lt;p&gt;So far I’ve written a custom Docker image that contains everything needed to run Hubot, bundled with a bunch of scripts. Everything is kept in a Git repository on our SCM server.&lt;/p&gt;
&lt;p&gt;In the future, I plan on making the Docker images more extensible by separating the configuration from the code, then publishing it to my GitHub.&lt;/p&gt;
&lt;p&gt;I also want to use Docker Compose to define a Redis container as a dependency of the Hubot container. This primarily allows for the Hubot container to be destroyed and rebuilt while the data stays safe in its own container.&lt;/p&gt;</content:encoded></item><item><title>Introducing ChatOps to my Workplace: Intro</title><link>https://jonsimpson.ca/introducing-chatops-to-my-workplace-intro/</link><guid isPermaLink="true">https://jonsimpson.ca/introducing-chatops-to-my-workplace-intro/</guid><description>Introducing ChatOps to my Workplace: Intro</description><pubDate>Sun, 17 May 2015 20:44:12 GMT</pubDate><content:encoded>&lt;h1 id=&quot;what-is-a-chatops&quot;&gt;&lt;img src=&quot;/static/images/2015/05/slack_colour_rgb-300x126.png&quot; alt=&quot;slack_colour_rgb&quot;&gt;What is a ChatOps?&lt;/h1&gt;
&lt;p&gt;Last week my roommate talked about how he was building custom integrations for his Company’s &lt;a href=&quot;http://slack.com&quot;&gt;Slack&lt;/a&gt; service. He was adding commands that would easily allow their client service team to extend the trial periods of their customers and other useful features. He was also in the process of working on being able to deploy their app to amazon via a single command.&lt;/p&gt;
&lt;p&gt;All of these spells one thing: Productivity.&lt;/p&gt;
&lt;p&gt;Trying to keep up with drinking as much of the new enterprise technology kool-aid, an &lt;a href=&quot;https://www.youtube.com/watch?v=NST3u-GjjFw&quot;&gt;excellent talk&lt;/a&gt; (&lt;a href=&quot;https://speakerdeck.com/jnewland/chatops-at-github&quot;&gt;slides&lt;/a&gt;) by Jesse Newland that I keep going back to every once in a while shows the power and benefit behind ChatOps.&lt;/p&gt;
&lt;p&gt;Dissecting this buzzword, the true benefit behind this technology and culture is giving:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Visibility&lt;/strong&gt; – Seeing what other people are doing and show what you’re doing to others to provide visibility and accountability into the operations that are performed. Sure, someone can say that they’re fixing the unresponsive server, but it’s not clear how they’re doing it and if its fixed yet without asking them. With all the operations used to fix such an issue available in a common chat room, its straightforward for others to read what was done in real time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Learnability&lt;/strong&gt; – Sure, you can have documentation and training to bring people on board, but nothing compares to seeing things done every day, most importantly their first day. In no time, a new employee can get up to speed faster than having to read through a lot of boring documentation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pairing&lt;/strong&gt; – Allowing two or more people to solve a problem together. Much like pair programming, it is the practice of having more than one brain working on a problem or even passively observing. Pairing allows for more scenarios to be explored and better reasoning.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Automation&lt;/strong&gt; – Simplifying tasks to the extreme. No need to log into a server, find and run a script. Just have a command in the chat room that does it. The command acts as a facade, presenting a clean api into your business activities. Being able to automate to the level of not having to switch out of the chat room is an amazing improvement in productivity.&lt;/p&gt;
&lt;h1 id=&quot;sneaky-implementation&quot;&gt;Sneaky Implementation&lt;/h1&gt;
&lt;p&gt;On my self assigned task to improve productivity and culture at my workplace, many other developers and I are pinned in a rut of completing support tasks and fixing bugs for our software. Between tasks I’ve set up Slack and have been integrating our services into it. My goal is to ultimately reveal it slowly to other developers, gaining momentum until critical mass is reached, where the majority of my colleagues praise Slack and the power of its usefulness until it becomes the de facto messaging and team communication tool.&lt;/p&gt;
&lt;p&gt;So far I have integrated our build server, Jenkins, and the source code repository, Gitlab, into Slack. Both of those are only dumb endpoints which dump data into Slack, but its already super interesting to have the visibility of just these services together. &lt;a href=&quot;https://www.nagios.org/&quot;&gt;Nagios&lt;/a&gt; checks, and maybe even &lt;a href=&quot;https://hubot.github.com/&quot;&gt;Hubot&lt;/a&gt;, a chat room robot, will come next.&lt;/p&gt;
&lt;p&gt;The stream of messages created when someone pushes code, the code being built, and then the test results reported back. Its taking Continuous Integration and reporting all of the activities to a central location!&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;See the next post following the theme of ChatOps, this time about &lt;a href=&quot;http://jonsimpson.ca/introducing-chatops-to-my-workplace-hubot/&quot; title=&quot;Introducing ChatOps to my Workplace: Hubot&quot;&gt;Hubot, the chatroom robot&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title>FPGA-Friday</title><link>https://jonsimpson.ca/fpga-friday/</link><guid isPermaLink="true">https://jonsimpson.ca/fpga-friday/</guid><description>FPGA-Friday</description><pubDate>Fri, 03 Apr 2015 15:08:57 GMT</pubDate><content:encoded>&lt;p&gt;Joking around with my roommate who works in an embedded computing software and hardware company, we started cracking jokes about the day Friday.&lt;/p&gt;
&lt;p&gt;His form of the common expression TGIF, is FPGA-Friday at his workplace. FPGAs are so commonplace that everyone takes out their FPGAs and geeks out with each other at the end of the day.&lt;/p&gt;
&lt;p&gt;FPGA-Friday just rolls off the tongue.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Photo from &lt;a href=&quot;https://www.flickr.com/photos/raggle/3174336185/&quot;&gt;raggle&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title>Where&apos;s the warm weather? I want to go biking!</title><link>https://jonsimpson.ca/wheres-the-warm-weather-i-want-to-go-biking/</link><guid isPermaLink="true">https://jonsimpson.ca/wheres-the-warm-weather-i-want-to-go-biking/</guid><description>Where&apos;s the warm weather? I want to go biking!</description><pubDate>Thu, 19 Mar 2015 14:19:00 GMT</pubDate><content:encoded>&lt;p&gt;Getting distracted from working on school things, I stumble upon &lt;a href=&quot;http://gobiking.ca&quot;&gt;GoBiking.ca&lt;/a&gt; and remember the promise I made to myself last summer to explore the Ottawa-Gatineau region on my bike.&lt;/p&gt;
&lt;p&gt;Last summer I went on two long trips. One was from The Hogs Back area into Gatineau Park’s Lac Meech. The other trip was from Hogs Back to Britannia Park.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2015/03/IMG_20140726_172225-1024x768.jpg&quot; alt=&quot;IMG_20140726_172225&quot;&gt;&lt;/p&gt;&lt;figcaption&gt;Stone balancing along the Ottawa river&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Over those two trips I had my beater Super-cycle generic mountain bike. Since that got stolen, I ended up getting a nice road bike. The weight difference and amount of speed you can get up to effortlessly has changed me for the better. Unfortunately I never went on a nice long trip with this new bike yet, only commuting the 7 kilometres to work every day, which was fine, but I’m regretting it now.&lt;/p&gt;
&lt;p&gt;That regret is about to get flipped this year! When the nice warm spring weather comes, I’m immediately hopping on my bike and heading over to Timbuktu. Okay, maybe I’ll coast up the east side of Ottawa river and back, since I haven’t done that route yet.&lt;/p&gt;
&lt;p&gt;I do plan on heading back into Gatineau park this year to do more sightseeing and to see if I can reach Lac Philippe. Me and my friends were considering camping up on Lac Philippe one of the weekends, but we never got around to planning it out. I recently head that the park shuts down the roads to cars on Sunday which is awesome! No fear of cars sneaking up on you.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/static/images/2015/03/IMG_20140608_181204.jpg&quot;&gt;&lt;img src=&quot;/static/images/2015/03/IMG_20140608_181204-1024x768.jpg&quot; alt=&quot;IMG_20140608_181204&quot;&gt;&lt;/a&gt;&lt;/p&gt;&lt;figcaption&gt;The Lac Pink lookout point provides a good view&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Just out of curiosity I looked up the biking directions from Ottawa, to my cottage in Gravenhurst, Ontario. A good 440 kilometres. Practically a 2 or 3 day bike. One time my family and I drove that route. I remember it being immensely beautiful with the fog rolling in among the hills. I’d definitely take a sports car along that route, but this is a post about bicycling, not one about cars. This trip is definitely out of my skill range, but a boy can dream, can’t he?&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/static/images/2015/03/IMG_20140604_195233.jpg&quot;&gt;&lt;img src=&quot;/static/images/2015/03/IMG_20140604_195233-1024x768.jpg&quot; alt=&quot;IMG_20140604_195233&quot;&gt;&lt;/a&gt;&lt;/p&gt;&lt;figcaption&gt;A female deer along the west end of the Ottawa bike paths&lt;/figcaption&gt;&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Some maintenance I definitely have to do to my bike this season is to get better tires and brakes. I managed to pop both the front and back tires multiple times last year hitting potholes and sewer drains. It might just be me not pumping up the tires enough. The brake pads can definitely be replaced. If I remember correctly, only one set of brakes works fully. The other just slows the bike to a stop.&lt;/p&gt;
&lt;p&gt;I can’t wait for the warm weather to come. I’m setting a goal for myself to get out and ride around the Ottawa-Gatineau region more than last year!&lt;/p&gt;</content:encoded></item><item><title>Work Reference</title><link>https://jonsimpson.ca/work-reference/</link><guid isPermaLink="true">https://jonsimpson.ca/work-reference/</guid><description>Work Reference</description><pubDate>Mon, 16 Mar 2015 18:22:31 GMT</pubDate><content:encoded>&lt;p&gt;So I asked my boss a little while ago If I could use him as a work reference. Here is his response after he got a call from my landlord:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I just got a call from [landlord name] (not sure how you spell that) for a reference.&lt;/p&gt;
&lt;p&gt;I said that she should only rent out the place if you promised to quit school and come work full time, and that we were paying you way too much money so she should charge a lot of rent.&lt;/p&gt;
&lt;p&gt;I also mentioned that she should under no circumstances ask you how to fix her router.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;My boss is one of a kind.&lt;/p&gt;</content:encoded></item><item><title>Quick Eclipse Tip</title><link>https://jonsimpson.ca/quick-eclipse-tip/</link><guid isPermaLink="true">https://jonsimpson.ca/quick-eclipse-tip/</guid><description>Quick Eclipse Tip</description><pubDate>Wed, 04 Mar 2015 00:57:26 GMT</pubDate><content:encoded>&lt;p&gt;Whenever programming Java, it’s always a good idea to log what the program is doing. Log this error, log that object’s value – it’s a constant occurrence. More often than not, I’m adding this simple line to the fields of every class I write:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;plaintext&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span&gt;private final Log log = LogFactory.getLog(ThisClass.class);&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Not only is this repetitive and time-consuming, it is easily automated in the form of a Template from within Eclipse.&lt;/p&gt;
&lt;h2 id=&quot;the-solution&quot;&gt;The Solution&lt;/h2&gt;
&lt;p&gt;Adding a text expansion in the form of an Eclipse Template, allows for typing &lt;em&gt;log&lt;/em&gt;, press the content assist hotkey, followed by pressing Enter, to automatically insert the Log declaration statement and add the necessary imports. Wow, that was fast, was my initial response. No way am I ever going to type that out manually again.&lt;/p&gt;
&lt;h2 id=&quot;how-to-do-it&quot;&gt;How to do it&lt;/h2&gt;
&lt;p&gt;In Eclipse, navigate to Window -&gt; Preferences. In the tree on the left-hand side, go under Java -&gt; Editor -&gt; Templates. Here is the screen for defining text expansions that will be available when using the editor. Click New, enter “log” or whatever of your choosing as the name to expand from. Select the context drop-down to “Java type members”. Finally copy and paste the following into the Pattern field:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;plaintext&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span&gt;private final Log log = LogFactory.getLog(${enclosing_type}.class);&lt;/span&gt;&lt;/span&gt;
&lt;span class=&quot;line&quot;&gt;&lt;span&gt;${imp:import(org.apache.commons.logging.Log, org.apache.commons.logging.LogFactory)}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save it, apply changes, and exit the Preferences window. You are all good to go now!&lt;/p&gt;
&lt;h3 id=&quot;note&quot;&gt;Note&lt;/h3&gt;
&lt;p&gt;This assumes that you’re using Apache Commons Logging library for all of your logging tasks. The above template can easily be converted to define your specific logger of choice.&lt;/p&gt;</content:encoded></item><item><title>You know there&apos;s a problem when...</title><link>https://jonsimpson.ca/you-know-theres-a-problem-when/</link><guid isPermaLink="true">https://jonsimpson.ca/you-know-theres-a-problem-when/</guid><description>You know there&apos;s a problem when...</description><pubDate>Mon, 01 Dec 2014 02:37:32 GMT</pubDate><content:encoded>&lt;p&gt;&lt;code&gt;$ java -version&amp;#x3C;br&gt;&amp;#x3C;/br&gt;#&amp;#x3C;br&gt;&amp;#x3C;/br&gt;# A fatal error has been detected by the Java Runtime Environment:&amp;#x3C;br&gt;&amp;#x3C;/br&gt;#&amp;#x3C;br&gt;&amp;#x3C;/br&gt;# SIGSEGV (0xb) at pc=0x00007fd0ee055ef8, pid=2336, tid=140535311697664&amp;#x3C;br&gt;&amp;#x3C;/br&gt;#&amp;#x3C;br&gt;&amp;#x3C;/br&gt;# JRE version: 6.0_45-b06&amp;#x3C;br&gt;&amp;#x3C;/br&gt;# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.45-b01 mixed mode linux-amd64 compressed oops)&amp;#x3C;br&gt;&amp;#x3C;/br&gt;# Problematic frame:&amp;#x3C;br&gt;&amp;#x3C;/br&gt;# V [libjvm.so+0x7feef8] InterpreterGenerator::generate_normal_entry(bool)+0x518&amp;#x3C;br&gt;&amp;#x3C;/br&gt;#&amp;#x3C;br&gt;&amp;#x3C;/br&gt;# If you would like to submit a bug report, please visit:&amp;#x3C;br&gt;&amp;#x3C;/br&gt;# http://java.sun.com/webapps/bugreport/crash.jsp&amp;#x3C;br&gt;&amp;#x3C;/br&gt;#&amp;#x3C;br&gt;&amp;#x3C;/br&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Fixed this issue by reinstalling the JVM. Virtual machines really don’t like random reboots. Corruption starts to appear everywhere.&lt;/p&gt;</content:encoded></item><item><title>The Future is Bright? (Maybe)</title><link>https://jonsimpson.ca/the-future-is-bright-maybe/</link><guid isPermaLink="true">https://jonsimpson.ca/the-future-is-bright-maybe/</guid><description>The Future is Bright? (Maybe)</description><pubDate>Tue, 11 Nov 2014 04:29:13 GMT</pubDate><content:encoded>&lt;h3 id=&quot;the-story&quot;&gt;The Story&lt;/h3&gt;
&lt;p&gt;Its Saturday night and all I want to do is watch The Amazing Spiderman. Hey, this should be easy, right? Just put the movie on a USB hard drive and plug it into the Xbox or PS3, eh? Wrong. This is unfortunate, but I’ll figure out another way; my desire for Spiderman will not stop at this speedbump.&lt;/p&gt;
&lt;p&gt;My next logical step is to setup a UPnP server on my laptop to stream the movie to the PS3 or, eventually, the Xbox.&lt;/p&gt;
&lt;p&gt;MediaTomb can transcode your media and serve it to any UPnP client. It streams to any PS3 without an issue. A clause to the previously mentioned was some “audio playback issues”. Streaming to a Xbox 360 was unsupported. I was faithful that the Xbox would support the de facto standard of media streaming, UPnP 1.0, but alas, no dice. MediaTomb is out.&lt;/p&gt;
&lt;p&gt;Next up was uShare. uShare is another UPnP server that offers minimal configuration and Xbox streaming support. After setting up uShare and pulling up the shared content list on the Xbox, the Xbox would not, for the life of it, show uShare and let me stream Spiderman.&lt;/p&gt;
&lt;p&gt;At this point I was getting pretty pissed off at these proprietary, closed-source boxes of DRM.&lt;/p&gt;
&lt;p&gt;It was time to give up the fancy streaming technologies and settle on using a USB stick to physically plug into the console and watch Spiderman. Five minutes later the movie is on the stick, plugged into the Xbox and the menus are flying by as I impatiently page my way towards the location of the movie.&lt;/p&gt;
&lt;p&gt;Lo and behold! Here comes another issue. To watch the video which has AAC encoded audio the Xbox has to download a free codec pack to be able to read it. To get the free codec pack I had to be signed into a Xbox Live account. It was not my Xbox, so I had to remember some Xbox Live burner account I created many, many years ago. My Live account has multifactor authentication enabled (good job Microsoft), so it took me a second to understand that I had to create an application specific password for the Xbox, since the Xbox didn’t have the ability to login using multifactor authentication.&lt;/p&gt;
&lt;p&gt;After logging in, the “free” purchase of the AAC-decoder-didn’t go through because I had to have a credit card attached to my account. God damn. The last thing I want to have happen is for someone to steal my credit card info from hacking into some corporations database. I tried a prepaid credit card with no money on it, but that failed, so I then put in my PayPal details and quickly got back to purchasing that FREE codec pack for watching Spiderman.&lt;/p&gt;
&lt;p&gt;Then it finally worked.&lt;/p&gt;
&lt;h3 id=&quot;reflection&quot;&gt;Reflection&lt;/h3&gt;
&lt;p&gt;Where to start. Where to start. Home theatre media devices have only been around for a decade. Being pioneered by hackers trying to get their media onto their TVs, companies soon entered the market. With them they brought brand loyalty and limitations to how you can consume your content. The walls on the walled garden kept getting higher. It is enticing since its simple, but your freedom to purchase and consume content from wherever is limited by what the company thinks is best for maximizing profits.&lt;/p&gt;
&lt;p&gt;This is why open-source is ravaging the proprietary, closed-source software and hardware market. The Android Open Source Project is a perfect example. Over its existence, its taken a huge chunk of the cellphone market away from the closed-source systems of Apple and Microsoft. Android has achieved this by (using Linux first of all) providing an ecosystem where electronics manufacturers and end users can completely customize the experience of their devices by being able to edit the source code, if needed. With Apple and Windows products you get a simple system that you’re expected to like and live with for the extent of its use. If you’re not able to change an aspect of how it works, sorry bud, you’re out of luck. Have fun reverse engineering that code. Android users can and are actively participating in creating and modifying their devices to their hearts content, while reintegrating their changes back into the ecosystem, making the Android platform all the more better for every user since they have the ability to choose.&lt;/p&gt;
&lt;p&gt;Getting back on track, Digital Rights Management, or DRM, in most cases makes it harder for the consumer to enjoy their content. I had this exact problem when trying to stream Spiderman to the PS3. Nonstandard specifications, and forcing a walled garden onto the device offputs the consumer since their content may be stuck inside without any way of getting it out. The general population doesn’t care about this though.&lt;/p&gt;
&lt;p&gt;Android is already making its way to the TV via set-top boxes, and recently the Chromecast, but other open-source projects are making their way as well; Ubuntu TV being one of those. Hopefully the market can flourish and spawn a large community of hackers who contribute and make these open-source projects better.&lt;/p&gt;
&lt;p&gt;I just want to be able to watch Spiderman without jumping through hoops.&lt;/p&gt;</content:encoded></item><item><title>Documentation</title><link>https://jonsimpson.ca/documentation/</link><guid isPermaLink="true">https://jonsimpson.ca/documentation/</guid><description>Documentation</description><pubDate>Fri, 24 Oct 2014 20:02:54 GMT</pubDate><content:encoded>&lt;p&gt;Me: ah, I love the documentation in that script (sarcasm)&lt;/p&gt;
&lt;p&gt;Coworker: usually the documentation is in &lt;em&gt;history | grep send_specific_dates.sh&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Me: (Laughing loudly in the office) haha nice&lt;/p&gt;</content:encoded></item><item><title>Of Recent Events</title><link>https://jonsimpson.ca/of-recent-events/</link><guid isPermaLink="true">https://jonsimpson.ca/of-recent-events/</guid><description>Of Recent Events</description><pubDate>Wed, 22 Oct 2014 19:22:02 GMT</pubDate><content:encoded>&lt;p&gt;&lt;em&gt;I was going to write a post about looking forward to tonight’s Ottawa WordPress meet-up, but as of today’s events I find that there is a more important topic to blog about.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Today’s attack on Parliament and the shooting of Nathan Cirillo at the National War Memorial reminded me of 9/11. When 9/11 happened, the event seemed so surreal. Broadcasted over the TV or on the internet, it was hard to understand the scene for someone who has never walked the streets of New York or understood what this meant in a geo-political sense. I was only a month shy of being 8 years old.&lt;/p&gt;
&lt;p&gt;Back to October 22nd: Never having lived so close to an attack like this, I originally thought this was just a random shooting. As I followed CBC’s live blog though, I started to put things into perspective. This wasn’t just a shooting, this was an attack on the Government of Canada.&lt;/p&gt;
&lt;p&gt;Living in Ottawa and knowing the downtown core area very well has made this event more personal than it would have been if it had occurred in another city. Having walked those streets and visited those landmarks on many occasions, it’s hard to believe something so bad could occur at the same place where I associate safety and good times with.&lt;/p&gt;
&lt;p&gt;Standing at my desk, trying to get work done as I constantly check for the latest twitter updates, I eventually break for lunch quite late in the day: sometime past 2 pm. Hitting up my favourite shawarma place, the day seems even eerier when the restaurant is dead empty. Here I begin to write this post while chowing down on a healthy serving of garlic potatoes.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Interesting Articles&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://ottawacitizen.com/news/national/michael-zehaf-bibeau-journey-to-death-on-parliament-hill&quot;&gt;http://ottawacitizen.com/news/national/michael-zehaf-bibeau-journey-to-death-on-parliament-hill&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title>Twenty</title><link>https://jonsimpson.ca/twenty/</link><guid isPermaLink="true">https://jonsimpson.ca/twenty/</guid><description>Twenty</description><pubDate>Wed, 01 Oct 2014 18:44:43 GMT</pubDate><content:encoded>&lt;p&gt;Goodbye teenage years, hello twenties!&lt;/p&gt;
&lt;p&gt;Gone are the years of figuring out what this whole life thing is about, time to finally take that knowledge and have the time of my life shaping my future into whatever I please.&lt;/p&gt;
&lt;p&gt;Looking back over the year, I’ve accomplished a lot. I’ve landed an awesome co-op job working at a small company solving issues in the hospitality industry, a much busier freelance web design job on the side, and started cooking healthier meals. Not to mention, its been at least a year having moved out of my parents house into my current Ottawa residence with all of my roommates 🙂&lt;/p&gt;
&lt;p&gt;Here’s a few other miscellaneous achievements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Made and barbecued the perfect homemade burger&lt;/li&gt;
&lt;li&gt;Read more fiction, non-fiction and computer science books and articles&lt;/li&gt;
&lt;li&gt;Found my love for Cherry MX blue keyboards&lt;/li&gt;
&lt;li&gt;Commute to work every day via bike&lt;/li&gt;
&lt;li&gt;Expanded my music tastes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some things that I’m looking forward to this year are going back to school after my co-op term to take some advanced third year courses, work on some personal programming projects, and cook more delicious food. But who knows? Much more notable events will definitely occur in the next year. Those unknown events are the ones I’m most looking forward to!&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Just like what &lt;a href=&quot;http://ma.tt&quot;&gt;Matt Mullenweg&lt;/a&gt;, the creator of WordPress has been doing, I like the idea of writing a post on your birthday for reflecting on the past and future year. Here is it’s first form on my blog.&lt;/em&gt;&lt;/p&gt;</content:encoded></item><item><title>A Path To Justice</title><link>https://jonsimpson.ca/a-path-to-justice/</link><guid isPermaLink="true">https://jonsimpson.ca/a-path-to-justice/</guid><description>A Path To Justice</description><pubDate>Fri, 05 Sep 2014 03:26:13 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;/static/images/2014/09/apathtojustice-clip.jpg&quot;&gt;&lt;img src=&quot;/static/images/2014/09/apathtojustice-clip.jpg&quot; alt=&quot;apathtojustice-clip&quot;&gt;&lt;/a&gt;I would like to announce the completion of one of my latest website jobs: &lt;a href=&quot;http://apathtojustice.ca&quot; title=&quot;A Path To Justice&quot;&gt;A Path To Justice&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A Path To Justice is an upcoming documentary based on Humberview Secondary School’s law class that brought forth new evidence to the wrongfully convicted Steven Truscott case. The new-found evidence warranted a retrial, ultimately leading to an official pardon by the Canadian government.&lt;/p&gt;
&lt;p&gt;Over time this website should grow to serve the purpose of complementing the progress of the documentary: from planning, to production, then finally distribution.&lt;/p&gt;
&lt;p&gt;The website may be visited at &lt;a href=&quot;http://apathtojustice.ca&quot;&gt;apathtojustice.ca&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title>Moving a Jenkins Instance From one Server to Another</title><link>https://jonsimpson.ca/moving-a-jenkins-instance-from-one-server-to-another/</link><guid isPermaLink="true">https://jonsimpson.ca/moving-a-jenkins-instance-from-one-server-to-another/</guid><description>Moving a Jenkins Instance From one Server to Another</description><pubDate>Tue, 24 Jun 2014 21:53:45 GMT</pubDate><content:encoded>&lt;p&gt;During my time converting ZDirect’s SVN repo to Git, we decided to move our code from a server in Florida to a server in Ottawa. The same server also hosts our Jenkins build server. To keep bandwidth bills and build latency down we’re moving the Jenkins server over as well.&lt;/p&gt;
&lt;p&gt;Trial and error led me to copying the entire Jenkins directory, which is either at the &lt;code&gt;JENKINS_HOME&lt;/code&gt; environment variable or &lt;code&gt;~/.jenkins&lt;/code&gt; as the default. Launching the Jenkins executable while setting the &lt;code&gt;JENKINS_HOME&lt;/code&gt; environment variable will bring up an almost perfectly configured instance of Jenkins. I say almost because the Jenkins configuration should be looked over for any settings that are wrong for this new system. Some Jenkins configuration options that I had to change was the JDK home, Ant home and the external url of the server. An example launch script looks as follows:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;$ JENKINS_HOME=/path/to/jenkins/home/folder&amp;#x3C;br&gt;&amp;#x3C;/br&gt;$ java -jar jenkins.war&amp;#x3C;br&gt;&amp;#x3C;/br&gt;&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The&lt;a href=&quot;https://wiki.jenkins-ci.org/display/JENKINS/Administering+Jenkins&quot;&gt; Jenkins Wiki&lt;/a&gt; outlines how to move specific jobs as well.&lt;/p&gt;</content:encoded></item><item><title>Svn to Git Migration</title><link>https://jonsimpson.ca/svn-to-git-migration/</link><guid isPermaLink="true">https://jonsimpson.ca/svn-to-git-migration/</guid><description>Svn to Git Migration</description><pubDate>Thu, 19 Jun 2014 03:55:07 GMT</pubDate><content:encoded>&lt;p&gt;At my workplace ZDirect, we have a decade old SVN repository hosting about twenty projects and totalling about 13 000 commits. Recently, we’ve decided to switch over to using Git from SVN because of SVN slowly becoming antiquated and its various productivity slowdowns that are not seen in new version control systems.&lt;/p&gt;
&lt;h6 id=&quot;some-immediate-goals&quot;&gt;Some immediate goals&lt;/h6&gt;
&lt;ul&gt;
&lt;li&gt;Speed up the time it takes to clone a repo&lt;/li&gt;
&lt;li&gt;Simple branching and conflict handling&lt;/li&gt;
&lt;li&gt;More code reviews&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That last point about pull requests is actually a feature of the web-based software hosting system. We chose GitLab as our solution, but more on that will come in a later post.&lt;/p&gt;
&lt;h6 id=&quot;some-long-term-goals&quot;&gt;Some long-term goals&lt;/h6&gt;
&lt;ul&gt;
&lt;li&gt;Move towards continuous integration&lt;/li&gt;
&lt;li&gt;Use advanced Git workflows&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Since being the most comfortable with Git, I volunteered myself as the “Migration Lead”, where I coordinated both the technical side and the human side. There is an incredible amount of articles out on the web talking about how company X or average Joe Y moved their SVN codebase to Git. What has really helped me along the way so far is &lt;a href=&quot;https://www.atlassian.com/git&quot; title=&quot;Atlassian Git&quot;&gt;Atlassian’s Git articles and tutorials&lt;/a&gt;; outlining a standard workflow for the process really makes it trivial for anyone else to do the same.&lt;/p&gt;</content:encoded></item><item><title>Hacking Banshee</title><link>https://jonsimpson.ca/hacking-banshee/</link><guid isPermaLink="true">https://jonsimpson.ca/hacking-banshee/</guid><description>Hacking Banshee</description><pubDate>Sun, 02 Feb 2014 08:51:32 GMT</pubDate><content:encoded>&lt;p&gt;Since syncing music to my Samsung Galaxy S3 doesn’t work with Linux (for the most part), I’ve constructed a method of transferring the music over ssh from my laptop to my phone using a cool little program called &lt;a href=&quot;http://www.cis.upenn.edu/~bcpierce/unison/&quot;&gt;Unison&lt;/a&gt;. The solution is flawless and allows two-way syncing.&lt;/p&gt;
&lt;p&gt;One problem, (or challenge depending on how you think about it), is that the playlists that are managed inside of Banshee Media Player on my laptop have to be exported individually and manually to a file when I want to transfer them over to my phone. Having a way to automatically export all of your playlists to some predefined directory would be very helpful for automating my music syncing. After some Googling it seems like no one has solved this problem yet.&lt;/p&gt;
&lt;p&gt;I grabbed the Banshee source and started looking over its files associated with playlist exporting. Bingo! Shortly thereafter I found in the file &lt;code&gt;Banshee.ThickClient - Banshee.Gui.SourceActions.cs&lt;/code&gt; the method &lt;code&gt;OnExportPlaylist()&lt;/code&gt; which has the user interaction for exporting a playlist and the holy grail, the &lt;code&gt;playlist.Save()&lt;/code&gt; method call.&lt;/p&gt;
&lt;p&gt;The next logical step for me would be to figure out whether this functionality can be encapsulated into an extension, or if that’s not possible, a patch. I’ll definitely be following up on this.&lt;/p&gt;</content:encoded></item><item><title>Job Update and a Concurrency Rant</title><link>https://jonsimpson.ca/job-update-and-a-concurrency-rant/</link><guid isPermaLink="true">https://jonsimpson.ca/job-update-and-a-concurrency-rant/</guid><description>Job Update and a Concurrency Rant</description><pubDate>Fri, 24 Jan 2014 18:26:32 GMT</pubDate><content:encoded>&lt;p&gt;Well, it looks like the early bird catches the worm. After my roommates and I took this inconvenient online course introducing us to Carleton Universities Co-op program and multiple presentations and resume workshops later, I’ve finally done it. I’ve landed myself a prestigious 8-month co-op at &lt;a href=&quot;http://zdirect.com&quot;&gt;ZDirect&lt;/a&gt;, a global company that produces hotel marketing automation software. Development is exclusively done in their downtown Ottawa site and sales offices are around the world.&lt;/p&gt;
&lt;p&gt;Since Carleton’s Career and Co-op department uses a web-based job portal by Orbis Communications, and the fact that students like to procrastinate and leave everything to the last day that things are due, the résumé and cover letter uploading process has a flaw where if there are many users trying to concurrently upload their documents, this “document converter” would stall and not let you progress through the application process. My roommates and I started panicking and thinking of ways to fix it. One of my suggestions was to call up Carleton’s Computing Services (CCS) and order them a pizza in exchange for having them reboot the server. Unfortunately after giving CCS a call, they said the servers were managed by the Careers and Co-op department. Dead end there since it was already 9:30 pm and no one would be at the office.&lt;/p&gt;
&lt;p&gt;I decided to compromise and find emails of hiring managers for my top 10 choices. Long story short, I emailed ZDirect that night, had an interview with them the very next day (that went very well), and got a phone call the day after with a job offer.&lt;/p&gt;
&lt;p&gt;Lesson of the day: have a backup plan and design and test your concurrent systems very well!&lt;/p&gt;</content:encoded></item><item><title>Some smooth Jazz</title><link>https://jonsimpson.ca/some-smooth-jazz/</link><guid isPermaLink="true">https://jonsimpson.ca/some-smooth-jazz/</guid><description>Some smooth Jazz</description><pubDate>Thu, 23 Jan 2014 17:28:17 GMT</pubDate><content:encoded>&lt;p&gt;Here’s one of my recent favourite studying and relaxing mixes that I like to listen to: Dave Harrington – Plays Pretty Just For You. The publishers of this sound, &lt;a href=&quot;https://soundcloud.com/otherpeoplerecords&quot;&gt;OTHER PEOPLE&lt;/a&gt;, has some other really excellent chill music.&lt;/p&gt;
&lt;iframe frameborder=&quot;no&quot; height=&quot;166&quot; loading=&quot;lazy&quot; scrolling=&quot;no&quot; src=&quot;https://w.soundcloud.com/player/?url=https%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F113835828&amp;#x26;auto_play=false&amp;#x26;hide_related=false&amp;#x26;visual=false&amp;#x26;show_comments=false&amp;#x26;show_user=false&amp;#x26;show_reposts=false&amp;#x26;color=ff5500&quot; width=&quot;100%&quot;&gt;&lt;/iframe&gt;</content:encoded></item><item><title>Ice Storm Photography</title><link>https://jonsimpson.ca/ice-storm-photography/</link><guid isPermaLink="true">https://jonsimpson.ca/ice-storm-photography/</guid><description>Ice Storm Photography</description><pubDate>Thu, 26 Dec 2013 23:58:46 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-1-of-15.jpg&quot; alt=&quot;Ice Trees (1 of 15)&quot; title=&quot;Ice Trees (1 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-2-of-15.jpg&quot; alt=&quot;Ice Trees (2 of 15)&quot; title=&quot;Ice Trees (2 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-3-of-15.jpg&quot; alt=&quot;Ice Trees (3 of 15)&quot; title=&quot;Ice Trees (3 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-4-of-15.jpg&quot; alt=&quot;Ice Trees (4 of 15)&quot; title=&quot;Ice Trees (4 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-5-of-15.jpg&quot; alt=&quot;Ice Trees (5 of 15)&quot; title=&quot;Ice Trees (5 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-6-of-15.jpg&quot; alt=&quot;Ice Trees (6 of 15)&quot; title=&quot;Ice Trees (6 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-7-of-15.jpg&quot; alt=&quot;Ice Trees (7 of 15)&quot; title=&quot;Ice Trees (7 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-8-of-15.jpg&quot; alt=&quot;Ice Trees (8 of 15)&quot; title=&quot;Ice Trees (8 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-9-of-15.jpg&quot; alt=&quot;Ice Trees (9 of 15)&quot; title=&quot;Ice Trees (9 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-10-of-15.jpg&quot; alt=&quot;Ice Trees (10 of 15)&quot; title=&quot;Ice Trees (10 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-11-of-15.jpg&quot; alt=&quot;Ice Trees (11 of 15)&quot; title=&quot;Ice Trees (11 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-12-of-15.jpg&quot; alt=&quot;Ice Trees (12 of 15)&quot; title=&quot;Ice Trees (12 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-13-of-15.jpg&quot; alt=&quot;Ice Trees (13 of 15)&quot; title=&quot;Ice Trees (13 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-14-of-15.jpg&quot; alt=&quot;Ice Trees (14 of 15)&quot; title=&quot;Ice Trees (14 of 15)&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/12/Ice-Trees-15-of-15.jpg&quot; alt=&quot;Ice Trees (15 of 15)&quot; title=&quot;Ice Trees (15 of 15)&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title>Exam study tip</title><link>https://jonsimpson.ca/exam-study-tip/</link><guid isPermaLink="true">https://jonsimpson.ca/exam-study-tip/</guid><description>Exam study tip</description><pubDate>Thu, 05 Dec 2013 14:16:07 GMT</pubDate><content:encoded>&lt;p&gt;My Abstract Data Structures Professor’s recommendation on studying for the bonus question on the exam:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;There will be a bonus question on the exam. A hint? Read all of Wikipedia and all written literature over the past 2000 years, no, better yet, the past 4000 years.&lt;/p&gt;
&lt;/blockquote&gt;</content:encoded></item><item><title>Dawn and Dusk</title><link>https://jonsimpson.ca/dawn-and-dusk/</link><guid isPermaLink="true">https://jonsimpson.ca/dawn-and-dusk/</guid><description>Dawn and Dusk</description><pubDate>Thu, 28 Nov 2013 14:52:38 GMT</pubDate><content:encoded>&lt;p&gt;Snapped some nice dawn and dusk photos on my daily commute to school&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/static/images/2013/11/wpid-IMG_20131126_151644.jpg&quot; alt=&quot;&quot;&gt;&lt;/p&gt;</content:encoded></item><item><title>A recent essay I read</title><link>https://jonsimpson.ca/a-recent-essay-i-read/</link><guid isPermaLink="true">https://jonsimpson.ca/a-recent-essay-i-read/</guid><description>A recent essay I read</description><pubDate>Tue, 19 Nov 2013 12:18:51 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;Surveillance is the business model of the Internet, after all — and [the NSA] simply got copies for itself.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;After multiple years of listening to Security Now mention Bruce Schneier for his security insights, I’ve finally subscribed to his Crypto-Gram newsletter. This is one of his many thought-provoking essays.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.schneier.com/blog/archives/2013/11/a_fraying_of_th.html&quot;&gt;A Fraying of the Public/Private Surveillance Partnership&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title>Building Eclipse C/C++ projects for 32-bit on a 64-bit system</title><link>https://jonsimpson.ca/building-eclipse-cc-projects-for-32-bit-on-a-64-bit-system/</link><guid isPermaLink="true">https://jonsimpson.ca/building-eclipse-cc-projects-for-32-bit-on-a-64-bit-system/</guid><description>Building Eclipse C/C++ projects for 32-bit on a 64-bit system</description><pubDate>Thu, 07 Nov 2013 00:27:48 GMT</pubDate><content:encoded>&lt;p&gt;In my systems programming course we learn various techniques of the C programming language. On multiple assignments we are provided code that only runs on 32-bit. I run Linux Mint 64-bit on my laptop, using Eclipse for my C/C++ development and the GNU toolchain to compile, which is an issue when the 32-bit C/C++ code is compiled by default for 64-bit. Valgrind is also unhappy since it cannot run the cross compiled 32-bit code (probably many other profiling tools encounter this as well).&lt;/p&gt;
&lt;p&gt;This guide should work flawlessly on Ubuntu and its derivatives since Linux Mint is based upon Ubuntu and uses its packages.&lt;/p&gt;
&lt;p&gt;I solved this problem by adding the &lt;code&gt;-m32&lt;/code&gt; tag to the eclipse compiler and linker settings, and installing the required 32-bit packages for my operating system.&lt;/p&gt;
&lt;h2 id=&quot;method&quot;&gt;Method&lt;/h2&gt;
&lt;h3 id=&quot;in-eclipse&quot;&gt;In eclipse&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Go to &lt;em&gt;Project-&gt;Properties&lt;/em&gt; then click on &lt;em&gt;Settings&lt;/em&gt;, which is under &lt;em&gt;C/C++ Build&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Make sure that the current configuration that is selected is &lt;em&gt;All Configurations&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Select the &lt;em&gt;Tool Settings&lt;/em&gt; tab, select &lt;em&gt;GCC C Compiler&lt;/em&gt;, if it isn’t selected already, and in the &lt;em&gt;Command&lt;/em&gt; form change &lt;code&gt;gcc&lt;/code&gt; to &lt;code&gt;gcc -m32&lt;/code&gt; to enable 32-bit compiling with GCC&lt;br&gt;
&lt;a href=&quot;/static/images/2013/11/gcc-compiler.png&quot;&gt;&lt;img src=&quot;/static/images/2013/11/gcc-compiler.png&quot; alt=&quot;gcc compiler&quot;&gt;  &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Do the same change of &lt;code&gt;gcc&lt;/code&gt; to &lt;code&gt;gcc -m32&lt;/code&gt; on the &lt;em&gt;GCC C Linker&lt;/em&gt; page, then click OK to save the settings&lt;br&gt;
&lt;a href=&quot;/static/images/2013/11/gcc-linker.png&quot;&gt;&lt;img src=&quot;/static/images/2013/11/gcc-linker.png&quot; alt=&quot;gcc linker&quot;&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Make sure to clean the project so that previously created object files compiled with 64-bit GCC are removed, otherwise the build will fail. Go to &lt;em&gt;Project-&gt;Clean…&lt;/em&gt; and either select &lt;em&gt;Clean all projects&lt;/em&gt; or select &lt;em&gt;Clean projects selected below&lt;/em&gt;, then selecting the current project (In my case it is &lt;em&gt;comp2401a3&lt;/em&gt;). Click OK&lt;br&gt;
&lt;a href=&quot;/static/images/2013/11/eclipse-clean-project.png&quot;&gt;&lt;img src=&quot;/static/images/2013/11/eclipse-clean-project.png&quot; alt=&quot;eclipse clean project&quot;&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3 id=&quot;required-packages&quot;&gt;Required Packages&lt;/h3&gt;
&lt;p&gt;Installing the following packages enabled standard C/C++ library calls to work. If other libraries are used within the code, additional 32-bit packages might have to be installed. Installation via Linux Mint, based on Ubuntu, is via the standard &lt;kbd&gt;apt-get install&lt;/kbd&gt; command.&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;plaintext&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span&gt;apt-get install g++-multilib lib32gcc1 libc6-i386 lib32z1 lib32stdc++6&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;plaintext&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span&gt;apt-get install lib32asound2 lib32ncurses5 lib32gomp1 lib32z1-dev lib32bz2-dev&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This might also be required if the code still does not compile:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;plaintext&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span&gt;apt-get install ia32-libs-gtk&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;h4 id=&quot;package-for-valgrind&quot;&gt;Package for Valgrind&lt;/h4&gt;
&lt;p&gt;When running Valgrind on the 32-bit code the following error occurred.&lt;br&gt;
&lt;samp&gt;&lt;br&gt;
valgrind: Fatal error at startup: a function redirection&lt;br&gt;
valgrind: which is mandatory for this platform-tool combination&lt;br&gt;
valgrind: cannot be set up. Details of the redirection are:&lt;br&gt;
valgrind:&lt;br&gt;
valgrind: A must-be-redirected function&lt;br&gt;
valgrind: whose name matches the pattern: strlen&lt;br&gt;
valgrind: in an object with soname matching: ld-linux.so.2&lt;br&gt;
valgrind: was not found whilst processing&lt;br&gt;
valgrind: symbols from the object with soname: ld-linux.so.2&lt;br&gt;
valgrind:&lt;br&gt;
valgrind: Possible fixes: (1, short term): install glibc’s debuginfo&lt;br&gt;
valgrind: package on this machine. (2, longer term): ask the packagers&lt;br&gt;
valgrind: for your Linux distribution to please in future ship a non-&lt;br&gt;
valgrind: stripped ld.so (or whatever the dynamic linker .so is called)&lt;br&gt;
valgrind: that exports the above-named function using the standard&lt;br&gt;
valgrind: calling conventions for this platform. The package you need&lt;br&gt;
valgrind: to install for fix (1) is called&lt;br&gt;
valgrind:&lt;br&gt;
valgrind: On Debian, Ubuntu: libc6-dbg&lt;br&gt;
valgrind: On SuSE, openSuSE, Fedora, RHEL: glibc-debuginfo&lt;br&gt;
valgrind:&lt;br&gt;
valgrind: Cannot continue — exiting now. Sorry.&lt;br&gt;
&lt;/samp&gt;&lt;/p&gt;
&lt;p&gt;After adding the suggested package &lt;code&gt;libc6-dbg&lt;/code&gt;, the error still occurred. Installing the 32-bit version of the package fixed the error:&lt;/p&gt;
&lt;pre class=&quot;astro-code github-dark&quot; style=&quot;background-color:#24292e;color:#e1e4e8; overflow-x: auto;&quot; tabindex=&quot;0&quot; data-language=&quot;plaintext&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span&gt;apt-get install libc6-dbg:i386&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;As shown above, it can be troublesome to set up cross compiling of 32-bit C/C++ code on a 64-bit machine, but if the reward of programming in the native operating system is worth more than programming in a virtual machine or on another computer, then it is a worthy cause.&lt;/p&gt;
&lt;hr&gt;
&lt;h3 id=&quot;sources&quot;&gt;Sources&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://sixarm.com/about/ubuntu-apt-get-install-ia32-for-32-bit-on-64-bit.html&quot;&gt;http://sixarm.com/about/ubuntu-apt-get-install-ia32-for-32-bit-on-64-bit.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://askubuntu.com/a/280757&quot;&gt;http://askubuntu.com/a/280757&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title>Devnull as a Service (DaaS)</title><link>https://jonsimpson.ca/devnull-as-a-service-daas/</link><guid isPermaLink="true">https://jonsimpson.ca/devnull-as-a-service-daas/</guid><description>Devnull as a Service (DaaS)</description><pubDate>Wed, 30 Oct 2013 22:41:12 GMT</pubDate><content:encoded>&lt;p&gt;Skimming over Hacker News: &lt;a href=&quot;http://devnull-as-a-service.com/&quot;&gt;http://devnull-as-a-service.com/&lt;/a&gt; made me laugh a lot. Check out the &lt;a href=&quot;https://news.ycombinator.com/item?id=6637480&quot;&gt;comments&lt;/a&gt; too!&lt;/p&gt;
&lt;p&gt;On a related note: &lt;a href=&quot;http://www.ietf.org/rfc/rfc1149.txt&quot;&gt;http://www.ietf.org/rfc/rfc1149.txt&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title>Hello Again Internet</title><link>https://jonsimpson.ca/hello-again-internet/</link><guid isPermaLink="true">https://jonsimpson.ca/hello-again-internet/</guid><description>Hello Again Internet</description><pubDate>Wed, 30 Oct 2013 21:01:51 GMT</pubDate><content:encoded>&lt;h3 id=&quot;past&quot;&gt;Past&lt;/h3&gt;
&lt;p&gt;This isn’t the first time I’ve had a personal website. The previous one started as a hobby of mine when I was nine and in grade 5. My inner geek found the internet fascinating and I wanted to be on it. The website was designed in the good ol’ FrontPage program by Microsoft. I packed the pages full of cool videos, game cheats, images and links, then shared my website with friends at school when we had computer class. I even went to the extent of designing a poster and putting it up in my classroom and in the computer lab.&lt;/p&gt;
&lt;p&gt;As the years progressed I learned more about CSS, and web standards, allowing me to build a more “sophisticated” website. I emphasize the word “sophisticated” because all it was, was static html pages, barely utilizing a linked stylesheet, and later on, a static php website. In hindsight I couldn’t believe how big of a nerd I was, but more importantly, I had the drive and devotion to learn and apply myself towards a medium that many people of my age at that time didn’t even attempt.&lt;/p&gt;
&lt;p&gt;Unfortunately a combination of me losing interest and not wanting to pay for the web hosting any more allowed the website to slip off of the internet at the beginning of 2009. The source files hopefully still reside on some hard drive in my parents house.&lt;/p&gt;
&lt;p&gt;Even though the content and coding was primitive and limited, it got me interested in coding, bringing me to where I am today: a university student studying computer science.&lt;/p&gt;
&lt;h3 id=&quot;present&quot;&gt;Present&lt;/h3&gt;
&lt;p&gt;After having built up a portfolio of website and visual design work from previous jobs and school work, and the fact that I’ll be competing to be an intern at a company this upcoming summer is why I’m reclaiming a slice of the internet for myself.&lt;/p&gt;
&lt;p&gt;I plan on including my CV and a portion of my portfolio on this website, as well as doing the whole blog thing, providing constructive commentary on topics that interest me, most likely in the spectrum of computer science.&lt;/p&gt;</content:encoded></item></channel></rss>