Installing Go the Right Way

It’s a pain to get the latest version of the Go programming language installed on a Debian or Ubuntu system.

Oftentimes, if your operating system isn’t the latest release, there’s only a slim chance that an up-to-date version of Go will be available in its package repositories. You may get lucky and find newer versions of Go in some random person’s PPA, where they’ve backported newer versions to older operating systems. This works, but newly released versions rely on the package maintainer to update and push them out.

Another option for installing the latest version of Go is building it from source. This method is tedious and error prone given the number of steps involved. Not exactly for the faint of heart.

Command line tools have been built for certain programming languages to streamline the installation of new versions. For Go, GVM is the Go Version Manager. Inspired by RVM, the Ruby Version Manager, GVM makes it quite simple to install multiple versions of Go, and to switch between them with one simple command.

The only downside to GVM is that it isn’t installed via a system package (e.g. a deb file). Don’t let that worry you too much, though! Installation is as simple as running the following curl-bash, and then using the gvm command to start installing different versions of Go. Here’s the installation guide/readme.

bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)

One confusing point: using GVM to install the latest version of Go resulted in a failed installation, which at first made no sense. Eventually, RTFM’ing led to the understanding that you first have to install an earlier version of Go to “bootstrap” the installation of any version later than 1.5, since Go’s compiler became self-hosting at that point. Explained here in more detail.

After following their instructions to install Go 1.4, it was possible to install the latest version of Go and get on with coding!
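
For reference, the bootstrap dance with GVM goes roughly like this (the version numbers are simply whatever was current at the time):

gvm install go1.4 -B    # -B installs a binary release, so no Go compiler is needed yet
gvm use go1.4
export GOROOT_BOOTSTRAP=$GOROOT
gvm install go1.8       # any 1.5+ release can now be compiled
gvm use go1.8 --default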

Private Docker Repositories with Artifactory

A while ago I was looking into what it takes to set up a private Docker Registry. The simplest way involves running the vanilla Docker Registry image with a small amount of configuration (vanilla here distinguishes the official Docker Registry from Artifactory’s Docker Registry offering). The vanilla Docker Registry is great for proofs of concept or for people who want to design a custom solution, but in organizations with multiple environments (QA, staging, prod) wired together in a Continuous Delivery pipeline, JFrog Artifactory is well suited for the task.
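
For reference, that “small amount of configuration” for the vanilla registry amounts to roughly a one-liner:

docker run -d -p 5000:5000 --restart=always --name registry registry:2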

Artifactory, the fantastic artifact repository for storing your Jars, Gems, and other valuables, has an extension to host Docker Registries, storing and managing Docker images as first-class citizens of Artifactory.

Features

Here are a few compelling features that make Artifactory worthwhile over the vanilla Docker Registry.

Role-based access control

The Docker Registry image doesn’t come with any fine-grained access control. The best that can be done is to allow or deny access to all operations on the registry via an .htpasswd file; at best, each user of the registry gets their own username and password.
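
A sketch of that setup with the official registry:2 image (username, password, and paths are examples; TLS termination is omitted, though the registry requires it for basic auth in practice):

htpasswd -Bbn alice s3cret > auth/htpasswd    # the registry requires bcrypt hashes
docker run -d -p 5000:5000 \
    -v "$PWD/auth":/auth \
    -e REGISTRY_AUTH=htpasswd \
    -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
    -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
    registry:2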

Artifactory uses its own fine-grained access control mechanisms to secure the registry – enabling users and groups to be assigned permissions to read, write, deploy, and modify properties. Access can be configured through the Artifactory web UI, REST API, or AD/LDAP.

Transport Layer Security

If enabled, Artifactory secures its Docker Registries with the same TLS encryption it uses for everything else. Unlike a vanilla Docker Registry, there is no need to set up a reverse proxy to tunnel insecure HTTP connections over HTTPS. The web UI even offers a screen to copy and paste the authentication information needed to connect to the secured Artifactory Registry.

Data Retention

Artifactory has the option to automatically purge old Docker images once the number of unique tags grows past a certain size. This keeps the number of available images, and therefore the storage space, within reason. Never purging old images leads to running out of disk space or, for you cloud users, expensive object storage bills.

Image Promotion

Continuous delivery defines the concept of pipelines. These pipelines represent the flow of commits from the moment a developer checks code into the SCM, all the way through CI, and eventually into production. Organizations that use multiple environments to validate their software changes “promote” a version of the software from one environment to the next. A version is only promoted if it passes the validation requirements for its current environment.

For example, the promotion of version 123 would first go through the QA environment, then the Staging environment, then the Production environment.

Artifactory includes Docker image promotion as a first-class feature, setting it apart from the vanilla Docker Registry. What would otherwise be a series of manual steps, or a script to maintain, is a single API call to promote a Docker image from one registry to another.
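
As a rough sketch of that endpoint (host, repository keys, and credentials are made up), promoting version 123 of an image from QA to staging looks something like:

curl -u admin:password -X POST \
    "https://artifactory.example.com/api/docker/docker-qa/v2/promote" \
    -H "Content-Type: application/json" \
    -d '{"targetRepo": "docker-staging", "dockerRepository": "myapp", "tag": "123", "copy": true}'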

Browsing for Images

The Artifactory UI already has the ability to browse the artifacts contained in Maven, NPM, and other types of repositories. It was only natural to offer the same for Docker Registries. All images in a repository can be listed and searched, and each image can be inspected further to show the tags and layers that compose it.

The current vanilla Docker Registry doesn’t have a GUI; only third-party projects provide one with functionality comparable to Artifactory’s.

Remote Registries

Artifactory can also act as a caching layer for other registries. Performance is gained when images and metadata are fetched from the caching Artifactory instance, avoiding the time and latency of a round trip to the origin registry. Resiliency is gained too, since the Artifactory instance can keep serving cached images and metadata even when the remote registry becomes unavailable. (S3 outage, anyone?)
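
Depending on how the reverse proxy in front of Artifactory is set up (the hostname and repository key here are invented), pulling through the cache can look as simple as:

docker pull artifactory.example.com/docker-remote/ubuntu:16.04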

Virtual Registries

Besides hosting local registries and caching remote ones, a virtual registry is a combination of the two. Virtual registries unite images from a number of local and remote registries, letting Docker clients conveniently use just a single registry. Administrators can then change the backing registries when needed, requiring no change on the client’s side.

This is most useful for humans who need ad hoc access to multiple registries that correspond to multiple environments. For example, the QA, Staging, Production, and Docker Hub registries can be combined together, making it seem like one registry to the user instead of four different instances. Machines running in the Production environment, for example, could only have access to the Production Docker Registry, thereby preventing any accidental usage of unverified images.

Conclusion

Artifactory is a feature-rich artifact repository for Maven, NPM, and many other repository types. The addition of Docker Registries provides a simple solution that caters to organizations implementing Continuous Delivery practices.

If you’re outgrowing an existing vanilla Docker Registry, or are entirely new to the Docker game, give Artifactory a try for your organization. It won’t disappoint.

Practicing User Safety at GitHub

GitHub explains a few of the guidelines they follow to prevent harassment and abuse when developing new features. Interesting points in the article include a list of privacy-oriented questions to ask yourself when developing a new feature, providing useful audit logs for retrospectives, and minimizing abuse from newly created accounts by restricting access to the service’s capabilities. Taken together, these considerations make abuse harder and the service a better environment for its users.

See the original article.

A few Gotchas with Shopify API Development

I had a fun weekend with my roommate hacking on the Shopify API and learning the Ruby on Rails framework. Shopify makes it super easy to begin building Shopify Apps for the Shopify App Store, essentially the Apple App Store equivalent for Shopify store owners who want to add features to their customer-facing and backend admin interfaces. Shopify provides two handy Ruby gems to speed up development: shopify_app and shopify_api. An overview of the two gems is given below, followed by their weaknesses.

Shopify provides a handy gem called shopify_app which makes it simple to start developing an app for the Shopify App Store. The gem provides Rails generators to create controllers, add webhooks, configure the basic models, and add the required OAuth authentication: just enough to get started.

The shopify_api gem is a thin wrapper around the Shopify API. shopify_app integrates it into the controllers automatically, making requests for a store’s data very simple.

Frustrations With the API

The process of getting a developer account and a development store created takes no time at all, and the API documentation is clear for the most part. Developing against the Plus APIs, however, can be frustrating the first time around. For example, querying the Discount API, Gift Card API, Multipass API, or User API results in unhelpful 404 errors. The development store’s admin interface is misleading here, since it exposes a discounts section where discounts can be added and removed.

By default, anyone who signs up as a developer only has access to the standard API endpoints, with no access to the Plus endpoints. These are only available to stores that pay for Shopify Plus, and after digging through many Shopify discussion boards, a Shopify employee explained that developers need to work with a store paying for Shopify Plus to get access to them. The 404 error returned by the API doesn’t explain this and only adds confusion to the situation.

One area that could be improved is the documentation around tiered developer accounts, which are barely mentioned. At the very least, the API should return a useful error message in the response body explaining what is needed to gain access.

Webhooks Could be Easier to Work With

The shopify_app gem provides a simple way to define any webhooks that should be registered with the Shopify API for the app to function. The defined webhooks are registered only once, right after the app is added to a store. During development you may add and remove many webhooks, so the most straightforward way to refresh them is to remove the app from the store and then add it again.

This can become pretty tedious, which is why I did some digging around in the shopify_app code and created the following code sample to synchronize the required webhooks with the Shopify API. Simply hit this controller or call the containing code somewhere in the codebase.
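
Equivalently, the registered webhooks can be inspected and pruned by hand against the REST API (the shop domain, access token, webhook id, topic, and app endpoint below are all placeholders):

# See what is currently registered
curl -s -H "X-Shopify-Access-Token: $TOKEN" \
    "https://example-shop.myshopify.com/admin/webhooks.json"

# Delete a stale webhook by id...
curl -s -X DELETE -H "X-Shopify-Access-Token: $TOKEN" \
    "https://example-shop.myshopify.com/admin/webhooks/123456.json"

# ...and register its replacement
curl -s -X POST -H "X-Shopify-Access-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"webhook": {"topic": "orders/create", "address": "https://myapp.example.com/webhooks/orders_create", "format": "json"}}' \
    "https://example-shop.myshopify.com/admin/webhooks.json"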

If there’s a better solution to this problem please let me know.

Lastly, to keep your sanity, the httplog gem is useful for tracing the HTTP calls that shopify_app, shopify_api, and any other gem makes.

Wrapping Up

The developer experience on the Shopify API and app store is quite pleasing, and the platform has been around long enough to build up a flourishing community of people asking questions and sharing code. I believe the issues outlined above can be easily solved and that solving them would make Shopify an even more pleasing platform.

The Software Engineering Daily Podcast is Highly Addictive

Over the past several months the Software Engineering Daily podcast has entered my regular listening list. I can’t remember where I discovered it, but I was amazed at the frequency at which new episodes were released and the breadth of topics. Since episodes come out every weekday there’s always more than enough content to listen to. I’ve updated My Top Tech, Software and Comedy Podcast List to include Software Engineering Daily. Here are a few episodes that have stood out:

Scheduling with Adrian Cockcroft was quite timely, as part of my final paper for my undergraduate degree focused on the breadth of topics in scheduling. Adrian discussed many of the principles of scheduling and related them to how they were applied at Netflix and earlier companies. Scheduling is something every software developer should know, as it occurs in all layers of the software and hardware stack.

Developer Roles with Dave Curry and Fred George was very entertaining and informative, presenting the idea of “Developer Anarchy”, a different approach to running (or not running) development teams. Instead of hiring Project Managers, Quality Assurance, or DBAs to fill specific niches on a development team, you mainly hire programmers and leave them to perform all of those tasks as they deem necessary.

Infrastructure with Datanauts’ Chris Wahl and Ethan Banks entertained as much as it informed. This episode had a more casual feel, with the hosts telling stories and bringing years of experience to bear on the current and future direction of infrastructure at all layers of the stack. Comparing the current success of Kubernetes to the not-so-promising OpenStack was especially informative: the many supporting organizations behind OpenStack pulled the project toward different priorities and visions, whereas Google, as the single organization driving Kubernetes, gave it one unified vision.


EDIT 2017-02-26 – Add Datanauts episode

Better Cilk Development With Docker

I’m taking a course that focuses on parallel and distributed computing. We use a compiler extension for GCC called Cilk to develop parallel programs in C/C++. Cilk offers developers a simple method for writing parallel code and, as a plus, it has been included in GCC since version 4.9.

The unjust thing about this course is that the professor provides a hefty 4GB Ubuntu virtual machine just for running the GNU compiler with Cilk. No sane person should have to download an entire virtual machine image just to run a compiler.

Docker comes to the rescue. It couldn’t be more space-efficient and convenient to use Cilk from a Docker container. I’ve created a simple Dockerfile based on Ubuntu 16.04 containing the latest GNU compiler, along with some Gists showing how to build and run an image with the dependencies needed to build and run Cilk programs.
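
The gist of it (the image name and program are placeholders; the Dockerfile itself is little more than FROM ubuntu:16.04 plus an apt-get install of GCC):

docker build -t cilk .
docker run --rm -v "$PWD":/src -w /src cilk \
    sh -c 'gcc -fcilkplus hello.c -o hello -lcilkrts && ./hello'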

Twenty-Two

Another grand year has gone by since my last birthday, so here’s my look back as I turn 22 today. I’m on the verge of finishing my Computer Science degree (yay!), had a blast vacationing with my family, and completed a few personal goals, to name a few highlights.

The view from above! – At Champlain Point Lookout in Gatineau Park

I accomplished my goal of biking to Gatineau’s Champlain Lookout, a hefty 60 km ride on a franken-road bike that got me there safely. Getting hooked on the Strava app has helped gamify my cycling fitness. When my grandmother passed away this summer, we spent five days in Owen Sound, a quiet town on Georgian Bay. I had the itch to go biking one day and ended up renting a proper road bike, a step up from what I own. Two hours later I had cycled 42 km, swum in the bay, and watched the scenery go by, completely satisfied.

This time last year ZDirect, the previous name of the company I work at, was acquired by TravelClick. Over the year we’ve hit home run after home run: a successful datacentre migration, a rebranding of the UI, a new profile screen, and plenty of features to boot. We’re expanding rapidly, acquiring new office space and hiring more developers, account managers, and support people. An incremental integration into TravelClick has been happening across processes, infrastructure, and software, and the sales figures show we’re doing something right. A few weeks from now I’ll be in New York for a short vacation. During this time I’ll stop by TravelClick headquarters in Times Square to say hi and see if I can grab some swag.

Celebrating our hard work with a barbeque

My main achievement this summer at TravelClick has been implementing a weekly Continuous Improvement meeting for my team, where we improve our processes and software by discussing and planning items of action. Seeing the entire team engage, drive the discussion, and plan what should be done is the truest sign of success, not to mention the improvements themselves.

I have started listening to a lot more software-related podcasts. Learning FTW! It’s crazy the amount of information you can learn just by listening when you’re not doing any brain-intensive tasks.

Continuing to experiment with new recipes, I have begun making sourdough bread as it’s much healthier than your regular white loaf. Vegetables and fruits have become more prominent in my meals, as well as Thai coconut soups every once in a while. I’ve reduced the number of unhealthy foods I eat such as bucket-loads of homemade pizza and frozen foods like perogies.

No better way to surprise my aunt than with a diorama

On the topic of health, I was disciplined for the first half of the year when it came to performing workouts five to six nights a week. Freeletics is an excellent social exercise app that only requires your body, a pull-up bar and a small amount of space to do short but intense exercises. Besides the pre-programmed workouts, performing 250 sit-ups a night definitely helped get that six-pack ready for the February trip to the Dominican Republic. (On a related note, Long Island Iced Teas became my new favourite drink).

When the summer started it was an abrupt transition from school-life to work-life. I eventually stopped using the Freeletics workout app and instead went for hour-plus bike rides a couple of times a week, using Strava to track my distance. I miss the disciplined workout. I want to get back into the routine again when my joints aren’t complaining.

Having a second (but small) source of income while keeping my other skills sharp was a theme this past year. I did some freelance logo design and web development with a great friend of mine. At the moment I feel like the work I’m doing is valued at less than it should be, so for future jobs I plan to value my time more.

Here are some raw metrics that represent part of what I have been up to over the past year:

A bad habit that I want to take control of is the number of hours of YouTube videos I consume per week. I could easily spend that time reading, sleeping, or getting things done. I feel that if I visualize my video consumption and set a goal of reducing the hours watched per week, I will gain back valuable time.

Posing in the Dominican

Some good practices that I want to continue this year are getting good sleep, writing every day, practicing mindfulness, taking notes while reading books, staying fit, and listening to podcasts and conference talks.

Reading about and using the Getting Things Done method is another goal of mine that I think would help me perform better and achieve more given all my work and personal tasks. Being able to organize better and be disciplined about getting things done will enable me to feel more fulfilled day-to-day.

Well, time to publish this and pack it up for the night, as it’s just past midnight. Today my friends and I are attending the Panda Games, an annual football game with a decades-old rivalry between Ottawa’s two big universities: Carleton and Ottawa U. It’s a big party and it’ll be a write-off for all of us.

Old Habits Die Hard: Copy and Paste

Copy and paste is bad.

Every single person who uses a computer learns how to copy and paste.

Copy and paste is necessary to perform many tasks.

Old habits die hard.

Email and Word documents and illegally downloaded movies all expect you to use copy and paste because it’s how you’re supposed to do things: copy this email into that folder, move that paragraph of text into the next chapter, copy those illegally downloaded movies to the external hard drive for safekeeping.

There’s a time and place for copy and paste, but why resort to it when you do the same task multiple times every day? It’s passable when the situation can’t be made better, or can it?

Sounds like old habits die hard.

Sure, copy and paste is quick when you’re good at it, but the time adds up. Take the process of navigating through a bunch of folders to copy the same five files to a different place. Let’s be honest and say it takes a minute of this theoretical person’s time, and based on the work they do, they repeat the same copy-and-paste job ten more times that day. Ten minutes a day over 250-odd working days is more than 40 hours a year of shuffling files by hand.
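
If that task were scripted once (file names and paths invented for illustration), the minute becomes a second:

# copy_reports.sh: the same five files, copied the same way, every time
for f in report.xlsx summary.docx notes.txt data.csv chart.png; do
    cp "/shared/incoming/$f" "$HOME/work/today/"
done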

Yes, old habits die hard.

Humans are excellent at copy and paste. Guess what else is excellent at copy and paste? Computers!!! Computers are better than humans in every way possible when it comes to performing repetitive copy-and-paste tasks. Speed. Accuracy. Longevity. It’s a combination that doesn’t disappoint.

Luckily, my soliloquy is headed toward the people who program computers for a profession: programmers. Programmers write programs to make computers do things for humans, and copying and pasting is one of those things. So why are programmers still copying and pasting repetitively themselves instead of programming a computer to do it for them?

Old habits die hard…

Let this sink in for a moment…

Programmers are proficient in telling the computer what to do, including how to copy and paste. But they’re still copying and pasting things themselves because they’re really good at it. It’s been a habit since they started using a computer however many decades ago.

This shocks me, especially since programmers are paid very well to program computers, yet they spend a chunk of their time performing repetitive copy-and-paste tasks they’re fully qualified to automate.

It’s a bad habit of programmers to copy and paste repetitively. Knowing so and continuing to do it must involve masochism. Be a better programmer and get the computer to copy and paste for you!

Implementing Agile Databases with Liquibase

We have an inconvenient problem. Our development databases are all snowflakes: each developer’s database has been hand-updated and maintained at the leisure of its developer, so that no two databases are alike.

We version our database changes into scripts with the creation date included in the name, but that’s where the database script organization and automation ends. There’s nothing to take those scripts and apply them to a local developer’s database, just plain old copy and pasting to run new scripts. Adding to the pain, the database scripts don’t go back to day 1 of the database. Instead, the development databases are passed around and copied whenever someone breaks their database and needs a new one, or a new employee comes on board and needs to set up their development environment.

Manually updating our personal development databases is problematic. Forgetting to run scripts can result in unknown side effects. Usually we don’t bother updating our databases with the latest scripts until we really have to, which happens whenever we launch our app. Once the app starts complaining about missing tables or fields, we’re on the hunt for the one script out of hundreds that will fix the problem.

As you can see, it is a system that wastes the productivity of all developers, not to mention the frustration of catching up after being behind for a long time. For a while now we’ve acknowledged that it’s a problem that should be fixed. A few of us looked into it and talked about using FlywayDB or Liquibase; Liquibase seemed the best choice for us since it is more feature complete. Since that discussion, one of our team members started experimenting with Liquibase and pushed that code to a branch, but it’s remained dormant for a while. I wouldn’t say integrating Liquibase into our development environment was abandoned because it was tough to do. Rather, I’m realizing it is a common trend for developer tooling and continuous improvement to make way for feature development, bug fixing, and level 3 support. Maybe our development team is just too small and busy to tackle these extra tasks, or our bosses don’t see the productivity sinkholes as significant and don’t allocate any time for fixing them. I would like to spur some discussion around this.

Anyways, on with the rest of the post.

Look! The Proof of Concept is Working!

I spent the greater part of my Good Friday getting Liquibase working with our app. Partway through the day I had the production database schema converted into the Liquibase XML format and checked into source control. A few more hours went into fixing minor SQL vs. MySQL issues with Liquibase’s import. (Who knew the BIT(1) type could have an auto increment? Liquibase disagrees).

Some time was spent creating a script at script/db (in the style of GitHub) for bootstrapping the Liquibase command with the developer database configuration.
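
Such a wrapper only needs a few lines (the paths and credentials here are placeholders, not our real configuration, and the MySQL JDBC driver is assumed to be on Liquibase’s classpath):

#!/bin/sh
# script/db: run Liquibase against the local developer database
exec liquibase \
    --changeLogFile=db/changelog.xml \
    --url="jdbc:mysql://localhost:3306/appdb" \
    --username=dev \
    --password=dev \
    "$@"

With that in place, ./script/db update brings a local database up to date, and ./script/db validate sanity-checks the change log first.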

Next I’ll mention some of the incompatibilities that I ran into while generating a Liquibase change log from the existing production database.

Generating a Change Log From an Existing Database

Liquibase offers a very helpful feature: taking an existing database schema and turning it into an XML change log that it can work with. The Liquibase website has documentation on the topic, but it doesn’t mention the slight incompatibilities you may run into, particularly with data types.
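
The export itself boils down to a single command (host and credentials invented):

liquibase --changeLogFile=db/changelog.xml \
    --url="jdbc:mysql://db-host:3306/appdb" \
    --username=readonly --password=secret \
    generateChangeLog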

Once the production database schema was converted into a Liquibase change log, I pointed Liquibase at a fresh MySQL server running locally. Running the validate and update commands on the change log produced some SQL errors, all related to data type conversions. These problems were fixed by modifying the change log XML file manually.

The first issue was that the NOW() function wasn’t being recognized. Simple enough, just replace it with CURRENT_TIMESTAMP.

Next was Liquibase turning all of the timestamp data types into TIMESTAMP(19). Doing a search and replace for TIMESTAMP(19) to TIMESTAMP did the trick.

The same issue as above happened to all datetime data types. Doing a search and replace for datetime(6) to datetime worked as expected.
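
Both replacements are a single sed invocation away (the change log filename is assumed):

sed -i 's/TIMESTAMP(19)/TIMESTAMP/g; s/datetime(6)/datetime/g' db/changelog.xml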

In the production database, one table had a primary key with the data type TINYINT(1). When Liquibase read this it converted the data type to BIT. It’s a known issue at the moment, but the fix is simple: change the type in the change log to some other data type like TINYINT (or TINYINT UNSIGNED). If this is a primary key, make sure you update the foreign keys in the other tables too; otherwise you’ll get errors when the foreign keys are applied.

This one was the weirdest. In the production database, a FULLTEXT index existed on a column of type mediumtext with no explicit length. When Liquibase created the database, it would fail while creating this index. After some googling, it appears that a FULLTEXT index requires a prefix length when operating on mediumtext. In the end, adding a (255), or however long, to the indexed column in the FULLTEXT index definition fixes it.

Lastly, the tables in the production database were set to use the UTF-8 encoding and the InnoDB engine, but Liquibase doesn’t pick this up. The workaround was to append a modifySql block along these lines to every table-creation change set in the Liquibase change log XML (attribute values may need adjusting for your schema):
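
<modifySql dbms="mysql">
    <!-- attribute values assumed; adjust the engine and charset to taste -->
    <append value=" ENGINE=InnoDB DEFAULT CHARSET=utf8"/>
</modifySql>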

Next Steps

Because we provide a multitenant SaaS offering, we drive a lot of our app’s behaviour from the database. Whether it’s per-customer feature toggles, a list of selectable fields, or email templates, a lot of data needs to be prepopulated in the database for the app to fully function.

The next bit of work in moving towards an agile database is to find all of the tables containing data the app needs to function. Liquibase offers two ways of loading this data into the database: from a CSV file, or specified directly in a change log.

Another important part of the database that needs to be checked in with Liquibase is the triggers and procedures. Liquibase doesn’t extract them automatically, so you’ll have to locate and export them manually.

Additionally, improving the developer experience by reducing the number of things developers have to do and know eases adoption and makes them more productive. The configuration needed to run Liquibase, a template for creating a new change log, and documentation of usage and best practices are all things that can bring a developer up to speed and make their life easier.

Lastly, there exists a Liquibase plugin for the Gradle build tool which makes it straightforward to orchestrate Liquibase from your Gradle tasks. This comes in handy when Gradle performs integration tests, or any other form of automated testing, in an environment that uses the database: test data can be loaded and cleaned up based on the type of testing.

Conclusion

No developer likes performing repetitive tasks, so minimize the pain by automating all the things. Developer tooling is often overlooked. As a developer, do yourself and your colleagues a favour and automate the tedious tasks into oblivion. As a manager, recognize the inefficiencies and prioritize fixing them. Attack the tasks that take the most time or would provide the most value if automated, then just start picking at it piece by piece.

Liquibase was discussed and acknowledged as the solution to our developer database woes. Following through with integrating Liquibase into our development environment, and going a few steps further to make it easy to use, leads to more time saved for actual work. Delaying the implementation of a solution means losing out on productivity gains you’re well aware of. Any productivity increase is a win for the developer’s productivity, the developer’s happiness, and the business as a whole.

My Top Tech, Software and Comedy Podcast List

Podcasts are an excellent source of entertainment and learning. I find that when I’m doing a mindless task like working out or commuting, I can actively focus on something more interesting. Being a student at the moment, I have a lot of time going to and from classes, making food, and procrastinating. I fill as much of that time as I can with podcasts, since I enjoy keeping up with the latest tech news, learning new skills, and having a laugh.

My Podcast Listening History

I’ve been a huge listener of podcasts since shortly before I got my first iPod (iPod nano 3rd generation, 8GB, turquoise), which was sometime during 2006, I think. Back then I started listening to a lot of the podcasts from the TWiT and Revision3 networks. Here I am now, just over 10 years later, and I’m still addicted.

Having a twice-a-week paper route gave me a lot of mindless time that I soon filled by listening to podcasts. I vividly remember delivering papers in my neighbourhood during a cold Canadian winter night while listening to an excellent holiday episode of Major Nelson Radio. I also remember laughing my ass off to Diggnation, hosted by Kevin Rose and Alex Albrecht, where they shared the greatest posts from Digg that week.

Security Now! was a momentous podcast for me. I started listening around 2006, when they were at episode 60, and I’ve been a listener ever since. I can’t thank Steve and Leo enough for their excellent discussions of current security issues and their in-depth episodes on various technologies, like the “How the Internet Works” series and the explanation of the Stuxnet virus. Because I’ve been listening to Security Now! for so long, I’ve learned so much about security and the web that I’m practically acing the fourth-year computer security class at my university. That knowledge will stick with me forever, already a great asset for my professional career.

The List

Here’s my list of favourite podcasts over the years, all categorized by genre.

Technology

Keeping up with the latest in tech is a given when you’re heavily immersed in the ecosystem, let alone when you’re a computer scientist.

Security Now!

One of the reasons why I’m studying computer science is the Security Now! podcast. Every week, Steve and Leo discuss the most interesting and current topics in security. Whether it’s a huge corporate hack, the latest ransomware, IoT security, or even various health topics, it’s a wealth of useful information for anyone interested in security.

This Week in Tech

The best source for tech news, This Week in Tech is hosted by Leo Laporte, a hero of mine for creating the TWiT network and continually educating me. I wouldn’t know half of what I know now if it wasn’t for Leo’s work. Each week the latest and greatest tech news is dissected by a representative panel of tech journalists, and it’s very informative to hear experts in the area give their opinions.

I remember when Twitter was getting big; it was all TWiT would talk about for weeks on end. It was even expected: “What Twitter news do we have this week?” was said by Leo almost every episode while Twitter was growing. Those were the days, when Leo was the #1 user on Twitter. Then the masses came and it went to shit. Okay, I still love Twitter. Rant over.

Hak5

Basically a technology hacker/DIY/hardware/software show with a lot of original content. I really got interested in Linux and hacking because of it. Just recently I saw that they’re working on quadrocopters. So cool! One of my favourite segments was on USB multibooting using GRUB: no need to burn multiple CDs for all of your live-boot ISOs and images, just put them all on one USB stick and give it a shiny menu to choose which one to use. The show has a kick-ass soundtrack, and it looks like they’ve expanded to a new studio and are now producing multiple shows. These guys have grown a lot!

Maximum PC No BS Podcast

Almost forgotten, I remembered this one while building the Runners Up section. The Maximum PC No BS Podcast could also go under the Comedy section, but Technology suits it better. On that same paper route I had when I was young, I listened to this podcast religiously as soon as each episode came out. Gordon Mah Ung and Will Smith were a perfect pair when talking shop about computer hardware, tech news, and building computers for the Maximum PC magazine.

Besides being overly frustrated about certain things, Gordon had a segment called Gordon’s Rant of the Week where he would vent about anything and everything, from Star Wars to breaking motherboards to shitty software. Every new year there’s usually a best-of episode of Gordon’s rants, which is a must-listen if you find them funny.

Comedy

These podcasts are timeless. You can go back and listen to all of them like I’ve done.

Rooster Teeth Podcast

One of the funniest podcasts around: various members of the Rooster Teeth company talk about ridiculous stories, gaming, current news, and science. They really don’t know much about science, but the cast always tries to argue it out until someone says something so illogical that the cast and crew burst out laughing. Moments like these are animated into short videos and posted to their YouTube channel as Rooster Teeth Animated Adventures.

It was the summer of 2013, between my first and second years of university, and I was living back home trying to find work. I landed a landscaping job, paid directly by the owners of a large Caledon estate, to fix up their property. The landscaping was fun but hard work. I discovered the Rooster Teeth Podcast early in the summer, and I’d listen to maybe five or six episodes in an eight-hour day. I blew through the backlog really fast and ended up listening to every episode before the end of the summer.

Diggnation

What’s the latest crap from Digg, you might ask? Kevin Rose and Alex Albrecht answered this tough question for 340 episodes from 2005 to 2012. The two would discuss the most interesting news bits on whichever sofa they landed on. Often very entertaining, Diggnation had me dying of laughter.

Software Development

Most, if not all, of these Software Development podcasts are timeless; a lot of the topics discussed are still useful today. The only real difference is the adoption of the tools and methodologies. I usually look through the list of earlier episodes and listen to the ones that catch my eye. Once you’re hooked on a podcast, it’s not hard to find yourself downloading and listening to everything they have available.

The Ship Show

Sadly, this podcast announced just a few days ago that they’re ending the show, so I’m still in my mourning stage at the moment. The Ship Show has been a fun and informative source on everything release engineering, DevOps, and build engineering in big and small companies. The panel discusses new tools, methods, and philosophies for improving parts of your tech company, often from firsthand experience. What makes this podcast special is that they delve into the technical and implementation details, which is great if you’re into that.

Arrested DevOps

The ADO podcast is made for people who don’t exactly know what this whole DevOps thing is about but would like to. Matt Stratton, the creator of the podcast, makes this point often, as he learned DevOps from scratch himself. Each episode goes into depth on a DevOps-related subject, often with knowledgeable guests from the industry to add more value to the discussion. A lot of the topics are higher level than what The Ship Show offers, but Arrested DevOps is just as valuable, since it’s important to understand the big picture and ask the big questions. Arrested DevOps and The Ship Show complement each other.

Software Engineering Radio

Sponsored by the IEEE, this podcast offers excellent interviews on a variety of Software Engineering topics. Episodes mainly consist of two or three people discussing a specific technology or methodology, taking the time to give listeners a good idea of its purpose and usefulness. The interviewers do their homework before each interview and therefore ask well-thought-out questions. Because the episodes cover such a wide breadth of topics, surfing through the past episodes is a must!

Software Engineering Daily

As if Software Engineering Radio wasn’t good enough, Software Engineering Daily applies the same format and content on a daily schedule. The amazing producer and interviewer Jeff, also a current host on Software Engineering Radio, has amassed hundreds of episodes covering everything from technology, to business, to soft skills, all pertinent to any software engineer. Dozens of hours of content can be queued up just by skimming through the episode history.

I wrote a post on marketing yourself from episode 243 with John Sonmez.

Runners up

Here’s a few podcasts that I’ve listened to for a long time, but didn’t make the list:

FLOSS Weekly – Randal Schwartz and other hosts interview open source software projects to share what each project is about. Generally pretty interesting; it’s cool to hear what people are doing in areas you’re usually not interested in or didn’t know existed.

Tekzilla – Great segments! Veronica Belmont and Patrick Norton were a killer team and shared great tips and tricks to do with technology.

Windows Weekly – Paul Thurrott had the perfect level of satire as he talked about Windows products that no one uses, like Windows Home Server (I used it, so I can bash it), and things that people do use, like Xbox and new Windows operating systems.

This Week in Google – Gina Trapani and Jeff Jarvis have excellent discussions about the cloud and everything Google.

Mahalo Daily – Veronica Belmont was the best in this short daily podcast format!


Update 2017-01-19: Added Software Engineering Daily