Hardware Accelerated Plex Transcoding with consumer GPUs in Dell Servers

One of my recent projects was to improve my Plex media streaming experience by figuring out how to install a consumer graphics card into the Dell server I built a few years ago. The main problem I was facing was watching any sort of 4k video on devices that only display content at 1080p, or being away from home with a slow internet connection that couldn’t stream the high bitrates of 4k content. When these situations occurred, the Dell server’s CPU would kick in to transcode the 4k content down to a lower resolution for streaming. Most of the time, the CPU would struggle to transcode the video at a fast enough rate, causing constant buffering. All 24 of the CPU’s cores would be maxed out, but that’s still not enough for modern video codecs and bitrates.

The solution was simple enough in theory. Get a graphics card, install it into the server, and configure Plex to use it. Should be a pretty quick process, right? It took a few weeks of sourcing the right parts, some crafty handiwork, and lots of research to make sure I was doing things right and not blowing the server up. More on that last part later. Make sure you have Plex Pass if you’re following along.

Initial graphics card

I had a few graphics cards lying around, so why not try those in the server? Pop the latest one in and discover that it won’t actually speed up video transcoding. It turns out that to support video transcoding, specifically encoding and decoding of video, a fairly recent (within the last several years) graphics card is needed. A great reference that everyone uses for graphics card compatibility with Plex transcoding is this Nvidia page. For Nvidia graphics cards, Plex needs the card to support the nvenc (encoding) and nvdec (decoding) functionality for the specific video codecs of your content. I don’t have firsthand experience with AMD graphics cards, but I imagine they follow a similar path to the one I went through.

Most video content these days is encoded with either the h.264 (AVC) or the newer h.265 (HEVC) codec. Plex can tell you which codec your videos are using. A successor to h.265 is AV1, which isn’t common at all yet and is only supported on some of the latest graphics cards. It will be years before content starts showing up as AV1, and even longer before the vast majority of content defaults to AV1. We’re still split between mostly h.264 and an increasing amount of content showing up as h.265.

Looking up the graphics card that I had in Nvidia’s compatibility matrix, it was clear that it was too old and didn’t support any of the necessary encoding and decoding of h.264 and h.265 videos. Time to find a card that would suffice, and not break the bank.

GTX 1660 Ti

From looking at the compatibility matrix, the Geforce GTX 1660 Ti stood out to me as a good balance of power usage, compatibility for both nvenc and nvdec with h.264 and h.265, and a relatively low price on Facebook Marketplace of $140. This card does require extra power from an 8-pin pci-e power cable, but the Dell server looked to have one of those available. I also looked into an RTX 4060 or newer for future-proofing with AV1 codec support, but they’re still quite pricey even on the aftermarket. GTX 1660 Ti it is. I didn’t even bother looking into buying an Nvidia Quadro graphics card, but some of them do show up for sale on secondhand marketplaces. They should work as well.

Immediately after getting the card I had to figure out how to connect the 8-pin female port on the server’s power supply to the 8-pin female port on the graphics card. In consumer desktop hardware, the power supply would usually already have a free 8-pin cable available, making plugging in the graphics card very easy. The server didn’t have this extra cable available. It was either buy a male-to-male 8-pin cable off Amazon or Ebay and wait weeks for it to arrive, or fudge something together by hand. I chose the latter as it was faster, and how hard could it be, right?

I soon discovered a part of the internet where people were discussing powering GPUs in desktops, powering GPUs in servers, people frying their hardware, similar but incompatible 8-pin power standards, what these “sense” pins are, and more. This quickly brought home the seriousness of what I was getting into. I didn’t want to damage any of my hardware by using the wrong cable or providing too much power to the video card.

After some reading, I discovered that there are two very similar-looking 8-pin power connectors for hardware inside of computers. There’s the pci-e 8-pin standard, which is widely used for graphics cards. It has three wires that carry 12v, and the rest are all ground. Then there’s the other 8-pin standard called EPS-12v, which looks incredibly similar but has four 12v wires and four ground wires. It would be bad to connect a pci-e 8-pin cable to an EPS-12v port – you’ll damage some hardware that way.

After more reading, I found someone doing something similar in a newer generation of Dell server than mine. Their experience and concern brought up the fact that some servers provide an 8-pin power port for graphics cards but use a different layout of which pins provide power and which provide ground. There was no closure on how it went for them, but it gave me the idea to follow in their steps and use a multimeter to inspect what voltages are actually coming out of each of those pins. This would influence my next steps and whether building a custom cable would work for my use case.

The goal was to determine the voltages coming out of the server’s EPS-12v connector’s pins. I familiarized myself with using a multimeter through a few different articles and videos, and tried out what I’d be doing on the server with a spare PC first, as I didn’t have much experience with electronics at this level. I also didn’t want to break anything in the process. After experimenting with checking the voltages on the spare computer’s 8-pin pci-e cable, I felt confident enough to inspect the server. I ended up with some surprising but useful results.

It turned out that the Dell server has an EPS-12v connector. Written beside it on the circuit board is GPU POWER, leading me to believe it should work for GPUs. When checking the pins, instead of four 12v pins, there were actually only three 12v pins – similar to 8-pin pci-e. This was a big warning that plugging anything into this port should be done with lots of consideration, as this isn’t actually an EPS-12v power connector! It’s a pci-e 8-pin in disguise.

At this point, it was clear that there were the right number of 12v and ground wires for a graphics card to theoretically work in this server. I bought a few pci-e 8-pin cable extenders that would fit into the server’s EPS-12v port and the graphics card’s pci-e port. What I needed to do was splice the female-to-male cables into a male-to-male cable, with the 12v and ground pins in the right orientation. This required a lot of patience and triple-checking that the right wires were in the right positions. I only had one shot at this, as failure would potentially fry parts of the server.

Another multimeter trick I picked up was testing the continuity of a wire – basically, whether electricity can flow from one end of the wire to the other. This helped verify that my splicing of the power cable with butt splice connectors was solid, and that there weren’t any wires somehow crossing each other.

Once I was confident enough in my custom cable creation, it was time to proceed with the riskiest part: installing the GPU and its custom power cable into the server. I found an afternoon to take the server offline, remove all the hard drives in case of electrical failure, plug the new cable in, test the voltages again, and plug the graphics card in. I monitored the server’s vitals and boot-up via the iDRAC remote management interface from my laptop. It started up and worked like a charm. As the stress and tension of massive hardware failure departed, it was time to put the server back together and move over to the OS configuration side of things.

As we can see in the above image, the graphics card and its power cable are successfully installed. Not pictured is a big plastic piece that covers the RAM and processors to better direct the airflow – it won’t fit anymore with this big of a GPU present. Also note how tall the graphics card is. It’s almost sitting on one CPU’s heatsink, while the top is almost flush with the top of the case. Of course there are no holes in the case for the GPU’s fans to pull air from, but there’s thankfully a small enough gap between the graphics card and the top of the case to provide enough airflow.

Aside: Dell’s quick release for pci cards sucks

After the successful power test, I needed to move around some pci cards before closing the case. I quickly found out that the quick release mechanism which slides down over the sides of the pci cards was stuck. There are some small metal bumps that stick out to put force on the pci cards and keep them well-seated. Well, one of these got stuck on the graphics card and wouldn’t loosen. The graphics card was now stuck in the pci slot. After some Googling, it didn’t seem like anyone else had faced this issue, and brute force wasn’t going to bend this metal bump out of the way. The only thing that breaks is your skin on its sharp metal. I ended up ever so carefully bending the server’s pci riser card out of its slot and away from the graphics card, cutting part of the case to free one side of the graphics card, then pulling the graphics card free. With everything finally free, I cut off the quick release mechanism so that this could never happen again.

What a piece of crap.

Nvidia drivers

Now that the graphics card was installed, powered up, and the system would boot normally, it was time to get the Nvidia drivers working. Some context about my Plex setup: it runs in a docker container on Ubuntu server. Part of enabling Plex to run in a docker container with a GPU is installing the Nvidia drivers on the Ubuntu host, as well as the nvidia-container-toolkit package.

I had hoped that installing these drivers would be easy, but one can dream. It’s painful, especially if your server is headless, and you’re avoiding installing xserver, the basis for window managers.

If you’re going through the same process, I recommend giving the ubuntu-drivers tool a try to install your drivers first. It seems well recommended and documented by Ubuntu, though it didn’t come preinstalled in my version of Ubuntu. I didn’t have any success with this method, and instead manually installed a bunch of packages recommended by a slew of places online. The following is what worked for me.

I followed the instructions on this Nvidia page to get the nvidia-container-toolkit package installed and configured for both docker and containerd support (since I had both installed on my system) – a rough sketch of that step is shown a little further down. Then the following packages installed the drivers, the libs for encoding/decoding, and the utils for the nvidia-smi tool.

$ sudo apt install nvidia-dkms-535-server libnvidia-decode-535 libnvidia-encode-535 nvidia-utils-535

A reboot later, the nvidia-smi command – a way to see the status of any Nvidia GPUs on the system – showed that the graphics card was working. If it hadn’t shown anything, there would likely have been a problem with the hardware or the packages that were installed.
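For reference, the container-toolkit configuration step mentioned above boiled down to something like this on my machine – a sketch that assumes Nvidia’s apt repository has already been added per their instructions, and uses the nvidia-ctk helper that ships with the toolkit:

# assumes Nvidia's apt repo is already set up (see their docs)
$ sudo apt install nvidia-container-toolkit
$ sudo nvidia-ctk runtime configure --runtime=docker
$ sudo nvidia-ctk runtime configure --runtime=containerd
$ sudo systemctl restart docker containerd

The nvidia-ctk commands just write the Nvidia runtime into each engine’s configuration file; restarting the daemons picks up the change.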

Configuring Docker

Now that the host can see the graphics card, it’s time to configure docker and the Plex container to use it. The Plex docs and the linked docker-specific docs have a good overview of enabling hardware transcoding.

For the normal docker command, the --gpus all flag is all you need to specify. I use the LinuxServer brand of docker images, and their Plex docs recommend a few different options: --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all, which automatically mounts the GPU and drivers into the container. It seems like the --gpus flag is newer and built into docker – it might deprecate the --runtime=nvidia method. I’ll stick with what LinuxServer recommends until they change. I use docker-compose to manage Plex for me, so the command line options to run Plex on docker are slightly different and defined in yaml files.
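Translated into compose syntax, those options look roughly like this – a sketch where the image tag and volume paths are illustrative:

services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      - ./config:/config          # illustrative host paths
      - /path/to/media:/media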

Configuring Plex

At this point, recreating the Plex container should expose the GPU to Plex. Following the Plex docs on enabling hardware transcoding should make it so that any transcoding that Plex needs to do will use the GPU instead of the CPU. In the Transcoder settings section you should also see your graphics card present in the Hardware Transcoding Device dropdown.

Now go and try transcoding a video on your TV or phone. Avoid trying it out with the Plex web UI – there’s a history of transcoding issues there that I and others have been facing. You should be able to successfully transcode 4k videos to different resolutions without breaking a sweat. Running the nvidia-smi command on the host should show that a Plex process is using the GPU.
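If you want to watch the encoder and decoder actually doing the work, nvidia-smi also has a device-monitoring mode that prints utilization once per second; the enc and dec columns are the interesting ones during a transcode:

$ nvidia-smi dmon -s u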

Here’s what I have set for my transcoding settings. Plex has these configuration options well documented. I’ve noticed that the Transcoder Quality option can be set to its highest and still perform completely fine without exhausting my GPU and CPU. I don’t have many concurrent videos being streamed, so I haven’t been able to bottleneck this setup.

Configure Plex to use a ramdisk for temporary transcode files

One of the instantaneous speedups to watching a transcoded video was switching the Plex temporary transcode directory over to /dev/shm – Linux’s temporary filesystem stored in memory. When the setting is left empty, Plex uses its default application support directory. I had to expose /dev/shm to the docker container for this to work, as the container doesn’t share the host’s /dev/shm by default, and update the setting in the Transcoder settings page in Plex. After this was enabled, I immediately noticed it was much quicker to seek forwards and backwards in a video being transcoded. Almost as if the video wasn’t being transcoded at all!
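Exposing it to the container is just another bind mount – in compose form, something like the lines below (or a -v /dev/shm:/dev/shm flag on a plain docker run). Keep in mind that /dev/shm defaults to half of the system’s RAM, so there’s a ceiling on how much transcode data can sit there at once:

    volumes:
      - /dev/shm:/dev/shm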

Transcoding for Plex web issue

As mentioned earlier, I and others have had a lot of difficulty watching transcoded videos when using the Plex web UI. Transcoding works perfectly everywhere else. There are comically long Reddit and Plex forum threads trying to debug this exact issue, and it’s still going on. I haven’t been able to find a fix, but it’s likely on the Plex web side of things. Hopefully the Plex developers will be able to fix this.

There is a workaround: instead of starting a video and then switching the quality over to a transcoded version, start the video at the necessary transcoded resolution from the beginning. I’ve had this work, but it’s a pain.

GPU load testing

As a fun last thing to do, there are simple tools out there to stress test your GPU over a period of time. I came across the wonderful gpu-burn project, which provides an easy way to stress test your GPU. Someone has created a pretty popular docker image that can easily be pulled to run this command. It can be run via

docker run --gpus all --rm chrstnhntschl/gpu_burn 120

where 120 is the number of seconds to run the test for. I found that running the test for 10 seconds or less didn’t actually provide enough time for the test to start up and run. Running it for a few minutes or more is best.
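While it runs, it’s worth keeping a live view of nvidia-smi open in a second terminal; the simplest way is to re-run it on a short interval:

$ watch -n 2 nvidia-smi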

In my case, Plex would only take up around 600 MB of GPU memory when transcoding a 4k video and sit at less than 10% utilization, with power staying around 25w and thermals around 30 celsius.

As I ran the stress test, I kept an eye on the output of nvidia-smi. It really does stress your GPU to the max. I was seeing most of the memory in use, 100% utilization, 100% power usage, and thermals around 60 celsius. Not so bad for a consumer GPU jammed into a server with a handmade power cable. After the test I watched the temperature gracefully decrease all the way down to 22 celsius – a testament to the decent airflow design in the server. I’ll likely never see the GPU utilized this much again, but it’s good to know that the server can handle it.

Conclusion

My use case of having an enterprise-grade Dell server run a consumer-grade GPU for Plex transcoding turned into quite a bit longer of a journey than I initially thought. I would not have expected things such as the graphics card getting stuck in the case or building my own power cable, but there are some constants in technology, such as drivers always being troublesome and software bugs lurking around the corner.

Time to go enjoy the fruits of my labour.

Building a homelab – a walk through history and investing in new hardware

This is the first post in a series of my experiences while Building a Homelab. The second post focuses on setting up a local DNS server and can be found here.

I’ve had a particular interest in home computers and servers for a long time now. One of my early experiences was wiring my childhood home up with CAT-5 ethernet to the rooms with TVs or computers and having them all connect to a 24 port 100 Mbps switch in the crawlspace. This was part of a master plan to provide the different computers in the house with an internet connection (when WiFi wasn’t as good as it is today), the TVs with smart media boxes (think Apple TV, Roku, and the like, but 10 years ago), and to tie it all together with a home server for serving media and storing files.

The magazine Maximum PC was a major source of this inspiration, as they had a number of captivating DIY articles on running your own home server, media streaming devices, and home networking. The memory is a bit rough around the edges, but these projects happened around the same time and on my own dollar – all for the satisfaction of having a bleeding edge entertainment system.

Around this time Microsoft had a product out, released a year earlier, called Windows Home Server. It was an OS which catered towards consumers and their home needs. Some of the features it had were network file shares for storing files, computer backup and restore, media sharing, and a number of extensions available from the community. I built a $400 box to run this OS and store two hard drives. The network switch in the crawlspace was a perfect place to put this headless server. Over many years this server was successfully used for computer backups, file storage, network bandwidth monitoring, and media serving to a number of PCs and media streaming boxes attached to TVs.

Two of the TVs in the house had Western Digital TV Live boxes for playing media off of the network. These devices were quite basic at the time: only Youtube, Flickr, and a handful of other services were available – no Netflix or the other now-popular Internet streaming services. Instead, they were primarily built for streaming media off of the local network – in this case off of the home server’s file share. My family and I were able to watch movies and TV shows from the comfort of our couch, and on-demand. This was crazy cool at the time, as most people were still using physical media (DVD/Blu-ray) and streaming media had not taken off yet. I also vaguely remember hacking one of the boxes to put on a community-built firmware.

Windows Home Server was great at the time since it offered all of this functionality out of the box with simple configuration. I remember playing with BSD-based FreeNAS on old computers and being overwhelmed at all of the extra configuration needed to achieve something that you get out of the box with Windows Home Server. Additionally, the overhead of having to administer FreeNAS while only having a vague knowledge of Linux and BSD at the time wasn’t a selling point.

Now back to current times. I’m in the profession of software development, have been using various Linux distros for personal use on laptops and servers, and would now consider myself a sysadmin enthusiast. Living in my own place, I’ve been using my own Ubuntu-based laptop to run a Plex media server and stream content to my Roku Streaming Stick+ attached to my TV. The laptop’s 1 TB hard drive was filling up. It was also inconvenient to have this laptop constantly on for serving content.

Browsing Reddit, I came across r/homelab, a community of people interested in owning and managing servers for their own fun. Everything from datacenter server hardware to Raspberry Pis, networking, virtualization, operating systems, and applications. This subreddit gave me the idea of purchasing some decommissioned server hardware from eBay. I sat on the idea for a few months. Covid-19 eventually happened, and with all my spare time I gave in to buying some hardware.

After a bunch of research on r/homelab about which servers are quiet, energy efficient, extendable, and will last a number of years, I settled on a Dell R520 with 2 x 6 cores at 2.4 GHz, 48 GB of DDR3 RAM, 2 x 1 Gbit NICs, and 8 x 3.5″ hard drive bays. I bought a 1 TB SSD as the boot drive and a refurbished 10 TB hard drive for storing data.

The front of the Dell R520, showing the 8 3.5″ drive bays and some of the internals.

Since I intended on running the ZFS filesystem on the data drive, many people gave the heads up that the Host Bus Adaptor (HBA) card (a piece of hardware which connects the SAS/SATA hard drives and SSDs to the motherboard) comes with the default Dell firmware. This default firmware caters towards always running some sort of hardware-based RAID setup, thus hiding the SMART status of all drives. With ZFS, accessing the SMART data for each drive is paramount for data integrity. To get around this limitation of the included HBA card, the homelab community has some unofficial firmware for it which exposes IT mode – basically a way to pass each drive straight through to the OS, completely bypassing any hardware RAID functionality. Some breath holding later and the HBA card had the new firmware.

I bought a separate HBA card with the knowledge at the time that the one that comes with the Dell R520 didn’t have any IT mode firmware from the community. I ended up being wrong after a whole lot of investigation. Thankfully I should be able to flash new firmware on this card as well and sell it back on eBay.

A Dell Perc H310 Mini Mono HBA (Host Bus Adaptor) used in Dell servers for interfacing between the motherboard and SAS/SATA drives.

As the hardware was being figured out, I was also researching and playing with different hypervisors – operating systems made for running multiple operating systems on the same hardware. The homelab community often refers to VMware ESXi, Proxmox VE, and even Unraid. I tried out the first two, as Unraid didn’t have an ISO available to test with and wasn’t free.

After going through the pain of making a USB stick bootable for an afternoon, I eventually got ESXi installed on the system. Poking around, it was interesting to see that VM storage is handled by formatting a physical disk to vmfs, a VMware-specific format for storing multiple VMs. With the goal of having one of the VMs take full control over a drive formatted with the ZFS filesystem, ESXi provides a feature called hardware passthrough which bypasses virtualization of the physical hardware. One big blocker for me was the restriction on the free version which limits VMs to a maximum of 8 vCPUs – a waste of resources when there are 12 CPUs and not enough VMs to utilize them.

Next, I took a look at Proxmox by loading it up as a VM on ESXi. It’s Debian-based, which was a plus as I’m comfortable with systemd and Ubuntu systems already. The Proxmox UI appeared to have quite a few useful features, but it didn’t feel like what I needed. I was much more comfortable with the terminal, and these graphical interfaces to manage things felt more like a limitation than a benefit. I could always SSH into Proxmox and manage things there, but there’s still the aspect of learning the intricacies of how this turnkey system was set up. Who knows what was default Debian configuration and what was modified by Proxmox. Not to mention, what if Docker or other software was out of date and couldn’t be upgraded? This would be an unnecessary limitation I could avoid by rolling my own.

Lastly, I went back to my roots – Ubuntu Server. I spun up a VM of it on ESXi. Since I’m quite used to the way Ubuntu works, it was comfortable knowing what I could do. There were no 8 vCPU limitations with Ubuntu Server as the host OS – I could utilize all of the server’s resources. After some thinking I realized I didn’t have any need to run any VMs at the moment. In the past I’ve managed a number of VMs with QEMU on Ubuntu Server, so if the need arises again I can pull it off. The reason I’m not using any VMs is that I’m using Docker for all of my application needs. I already have a few apps running in Docker containers on my laptop that I’ll eventually transfer over to the server. Finally, ZFS on Linux has been available in Ubuntu for a while now, giving me the confidence that the data drive could be formatted with ZFS without a problem.

The internals of the Dell R520 with the thermal cover removed. Note the row of six fans across the width of the case to keep things cool.

In the end I scrapped the idea of running a hypervisor such as ESXi with multiple VMs on top of it, because my workloads all live in Docker containers instead. Ubuntu Server is more suitable since I am able to configure everything from an SSH console. If I may conjecture why the r/homelab community loves their VMs, it may be because many of the hobbyists are used to using them at their day jobs. There were a handful of folks who ran their own GUI-less, no-VM setups, but they were the minority.

In the end, Ubuntu Server 20.04 LTS was installed on the 1 TB SSD boot drive. The 10 TB HDD was formatted with ZFS in a single-drive configuration. The Docker daemon was installed from its official Apt repo, and a number of other programs were installed as non-root processes via Nix and Nixpkgs.
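For anyone curious, a single-drive ZFS pool on Ubuntu takes only a couple of commands – a sketch below, where the pool name and the by-id disk path are illustrative:

$ sudo apt install zfsutils-linux
$ sudo zpool create -o ashift=12 tank /dev/disk/by-id/ata-EXAMPLE_10TB_DRIVE   # illustrative device path
$ sudo zfs set compression=lz4 tank

Using the /dev/disk/by-id path rather than /dev/sdX keeps the pool pointed at the right drive even if device letters shuffle between boots.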

Conclusion

There are a few more things I want to discuss regarding the home server. Some of those include using Nix and Nixpkgs in a server environment and some of the difficulties, setting up a local DNS server to provide domain name resolution for devices on the network and in Docker containers, a reverse proxy for the webapps running in Docker containers using the Caddy webserver, and some DataDog monitoring.

In the future I have plans to expand the amount of storage while at the same time introducing some redundancy with ZFS RAIDz1, diving into being able to remotely access the local network via VPN or some other secure method, and better monitoring for uptime, ZFS notifications, OS notifications, and the like.

Better Cilk Development With Docker

I’m taking a course that focuses on parallel and distributed computing. We use a compiler extension for GCC called Cilk to develop parallel programs in C/C++. Cilk offers developers a simple method for developing parallel code, and as a plus, it has been included in GCC since version 4.9.

The unjust thing about this course is that the professor provides a hefty 4GB Ubuntu virtual machine just for running the GNU compiler with Cilk. No sane person would download an entire virtual machine image just to run a compiler.

Docker comes to the rescue. It couldn’t be more space-efficient and convenient to use Cilk from a Docker container. I’ve created a simple Dockerfile containing the latest GNU compiler for Ubuntu 16.04. Here are some Gists showing how to build and run a Docker image which contains the dependencies needed to build and run Cilk programs.
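As a rough sketch of what those boil down to, a Dockerfile along these lines pulls GCC into an Ubuntu 16.04 image (the package list and the fib.c example below are illustrative):

FROM ubuntu:16.04
# GCC 5.x on 16.04 still ships Cilk Plus support
RUN apt-get update && \
    apt-get install -y gcc g++ make && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /src

Building the image and then compiling and running a Cilk program against it looks something like:

$ docker build -t cilk .
$ docker run --rm -v "$(pwd)":/src cilk gcc -fcilkplus -o fib fib.c -lcilkrts
$ docker run --rm -v "$(pwd)":/src cilk ./fib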

Push-button Deployment of a Docker Compose Project

I was recently working on figuring out how to automate the deployment of a simple docker compose project. This is a non-mission-critical project consisting of a redis container and a Docker image of Hubot that we’ve built. Here’s the gist of the docker-compose.yml file:
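In rough form it’s just the two services, with the bot depending on redis – a sketch, with the volume path being illustrative:

version: '2'
services:
  zbot:
    image: zdirect/zbot
    restart: always
    depends_on:
      - redis
  redis:
    image: redis
    restart: always
    volumes:
      - ./redis-data:/data    # illustrative host path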

Whenever a new version of the zdirect/zbot image is published to the registry, a deploy script can be run. For example, the script used to automatically deploy a new version of a Docker Compose project is shown here:
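In essence it comes down to a couple of docker-compose invocations – a sketch, with the project path being illustrative:

#!/bin/sh
# Pull the newest images referenced by docker-compose.yml,
# then recreate any containers whose image has changed.
cd /opt/zbot
docker-compose pull
docker-compose up -d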

Yup, that’s all. It’s that simple. Behind the curtains, the script pulls the latest version of the image. Since the docker-compose.yml file doesn’t specify a tag, it defaults to latest. The old container is then removed and a new one started. Any data specified in volumes is safe since it’s mounted on the host and not inside the container. Obviously a more complicated project would have a more involved deployment, but simpler is better!

Integrating this deployment script into Rundeck, Jenkins or your build tool of choice is a piece of cake and isn’t covered here, but might be in a future post. This automation allows you to bridge the gap between building your code and running it on your servers, aka the last-mile problem of continuous delivery.

Acquisition, Docker and Rundeck

ZDirect, the wonderful company I work for, has been acquired by TravelClick, a billion dollar hospitality solutions company.

First of all: Woohoo! I can’t be more excited to be around at this time to jump-start my career.

One of the changes to occur as soon as possible is the consolidation of our datacentre into TravelClick’s. One of our devs recently found out about Docker and got interested in its power (Hallelujah! Finally it’s not just me who’s interested). Later on I’ll bring up Rundeck, a solution for organizing our ad-hoc and scheduled jobs that will assist in the move to Docker.

Docker

His plan is to Dockerize everything we have running in the datacentre to make it easier for our applications to be run/deployed/tested/you-name-it. The bosses are fine with that and are backing him up. I’m excited since this is a fun project right up my alley.

Since I’m working my ass off trying to finish my degree, I’m only in one day of the week to wash bottles and offer some Docker expertise. Last Friday I had a good chat with the dev working on the Docker stuff. We chatted about Kubernetes, Swarm, load balancing, storage volumes, registries, cron and the state of our datacentre. It was quite productive since we bounced ideas off of each other. He’s always busy, juggling a hundred things at once so I offered to give him a hand setting up a Docker registry.

By the end of the day I had a secure Docker registry running on one of our servers with Jenkins building an example project (ZBot, our Hubot based chatroom robot), and pushing the image to the registry after it is built. An almost complete continuous delivery pipeline. What would make this better is a way to easily deploy the newly created Docker image to a host.

Rundeck

Rundeck is a job scheduler and runbook automation tool. Aka it makes it easy to define and run tasks on any of the servers in your datacentre from a nice web UI.

Currently, we have a lot of cron jobs across many servers scheduled to run periodically for integration, backup and maintenance. We also ssh into servers to run various commands for support and operations purposes.

Here’s the next piece to our puzzle. Rundeck can fit into many of our use-cases. A few of them are as follows:

  • Deployment automation (bridge the gap between Jenkins and the servers)
  • Run and monitor scheduled jobs
  • Logging and accountability for ad-hoc commands
  • Integrate with our chatroom, for all the ChatOps
  • Automate more of production

As we move towards Dockerizing all of our apps, we have to deal with the issue of what we’re going to do with the cron jobs and scheduled tasks that we run. Since we’re ultimately going to move datacentres it makes the most sense to take the cron jobs and turn them into scheduled jobs in Rundeck. That way we can easily manage them from a centralized view. Need to run this scheduled job on another machine? No problem, change where the job runs. Need to rerun the scheduled job again? Just click a button and the scheduled job runs again.

The developers wear many hats. Dev and ops are two of them. Because we’re jumping from mindset to mindset it makes sense to save time by automating the tedious while trying not to get in the way of others. Rundeck provides the automation and visibility to achieve this speed.

With the movement of putting all our apps and services into Docker containers, Rundeck will allow us to manage that change while being able to move fast.


If you’re interested in joining us on this action-packed journey, we’re hiring.