Installing Go the Right Way


It’s a pain to get the latest version of the Go programming language installed on a Debian or Ubuntu system.

Oftentimes, if your operating system isn’t the latest release, there’s only a slim chance that an up-to-date version of Go is available in its package repositories. You may get lucky and find newer versions of Go in some random person’s PPA, where they’ve backported newer releases to older operating systems. This works, but you’re reliant on the package maintainer to package and push out each newly released version.

Another option for installing the latest version of Go is building it from source. This method is tedious and can be error prone given the number of steps involved. Not exactly for the faint of heart.

Command-line tools have been built for many programming languages to streamline the installation of new versions. For Go, that tool is GVM, the Go Version Manager. Inspired by RVM, the Ruby Version Manager, GVM makes it quite simple to install multiple versions of Go and to switch between them with a single command.

The only downside to GVM is that it isn’t installed via a system package (e.g. a deb file). Don’t let that worry you too much, though! Installation is as simple as running the following curl-bash, then using the gvm command to start installing different versions of Go. See the installation guide in the project’s README.

bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)

One confusing point: using GVM to install the latest version of Go initially resulted in a failed installation, which made no sense. Eventually, some RTFM’ing led to the realization that you first have to install an earlier version of Go to “bootstrap” the installation of any version later than 1.5, since from 1.5 onward the Go compiler is written in Go itself and needs an existing toolchain to build with. It’s explained in more detail in the gvm README.

After following their instructions to install Go 1.4, it was possible to install the latest version of Go and get on with coding!
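For reference, the whole bootstrap dance looks roughly like this (the version numbers are just whatever was current at the time):

gvm install go1.4 -B                # -B grabs the binary release, so no compiler is needed yet
gvm use go1.4
export GOROOT_BOOTSTRAP=$GOROOT     # point the build at the bootstrap toolchain
gvm install go1.8                   # versions newer than 1.5 now compile fine
gvm use go1.8 --default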

Private Docker Repositories with Artifactory

A while ago I was looking into what it takes to set up a private Docker registry. The simplest way involves running the vanilla Docker Registry image with a small amount of configuration (“vanilla” is used to distinguish the official Docker Registry from Artifactory’s Docker registry offering). The vanilla Docker Registry is great for proofs of concept or for people who want to design a custom solution, but in organizations with multiple environments (QA, staging, prod) wired together in a Continuous Delivery pipeline, JFrog Artifactory is well suited to the task.

Artifactory, the fantastic artifact repository for storing your Jars, Gems, and other valuables, has an extension that hosts Docker registries, storing and managing Docker images as first-class citizens of Artifactory.

Features

Here are a few compelling features that make Artifactory worthwhile over the vanilla Docker Registry.

Role-based access control

The Docker Registry image doesn’t come with any fine-grained access control. The best that can be done is to allow or disallow access to all operations on the registry using a .htpasswd file. In the best scenario, each user of the registry gets their own username and password.
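To make that concrete, locking down a vanilla registry looks something like the following sketch (the username, password, and paths are made up):

# Generate a bcrypt-hashed htpasswd entry for a single user
mkdir auth
docker run --rm --entrypoint htpasswd registry:2 -Bbn alice s3cret > auth/htpasswd

# Start the registry with basic auth enabled -- it's all or nothing
docker run -d -p 5000:5000 --name registry \
  -v "$(pwd)/auth:/auth" \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2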

Artifactory uses its own fine-grained access control mechanisms to secure the registry – enabling users and groups to be assigned permissions to read, write, deploy, and modify properties. Access can be configured through the Artifactory web UI, REST API, or AD/LDAP.

Transport Layer Security

If enabled, the same TLS encryption Artifactory uses elsewhere also secures its Docker registries. Unlike with a vanilla Docker Registry, there is no need to set up a reverse proxy to wrap insecure HTTP connections in HTTPS. The web UI even offers a screen with copy-and-paste authentication instructions for connecting to the secured Artifactory registry.
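Once that’s in place, pointing a Docker client at the registry is the standard flow (the hostname and repository names here are illustrative):

docker login artifactory.example.com
docker pull artifactory.example.com/docker-qa/zbot:123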

Data Retention

Artifactory has the option to automatically purge old Docker images once the number of unique tags grows past a certain size. This keeps the number of retained images, and therefore the storage space, within reason. Failing to purge old images can lead to running out of disk space or, for you cloud users, expensive object storage bills.

Image Promotion

Continuous delivery defines the concept of pipelines. A pipeline represents the flow of a commit from the moment a developer checks code into the SCM, all the way through CI, and eventually into production. Continuous Delivery organizations that use multiple environments to validate their software changes “promote” a version of the software from one environment to the next. A version is only promoted once it passes the validation requirements for its current environment.

For example, version 123 would first be promoted through the QA environment, then the Staging environment, then the Production environment.

Artifactory includes Docker image promotion as a first-class feature, setting it apart from the vanilla Docker Registry. What would otherwise be a series of manual steps, or a script to maintain, is a single API call that promotes a Docker image from one registry to another.
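As a sketch, promoting an image from a QA registry to a staging registry is one call to Artifactory’s promotion endpoint (the hostname, repository keys, and image name are made up):

curl -u "$USER:$PASSWORD" -X POST \
  -H "Content-Type: application/json" \
  -d '{"targetRepo": "docker-staging", "dockerRepository": "zbot", "tag": "123", "copy": true}' \
  https://artifactory.example.com/artifactory/api/docker/docker-qa/v2/promote

Setting copy to true keeps the image in the source registry as well; the default behaviour is to move it.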

Browsing for Images

The Artifactory UI can already browse the artifacts contained in Maven, NPM, and other types of repositories, so it was only natural to offer the same for Docker registries. All images in a repository can be listed and searched, and each image can be inspected further to show the tags and layers that compose it.

The current vanilla Docker Registry doesn’t ship with a GUI at all; only through third-party projects can you get a GUI offering functionality similar to Artifactory’s.

Remote Registries

Artifactory can also act as a caching layer for remote registries. Performance improves because images and metadata are fetched from the caching Artifactory instance, avoiding the time and latency of a round trip to the origin registry. Resiliency improves too, since the Artifactory instance can keep serving cached images and metadata to satisfy client requests even when the remote registry becomes unavailable. (S3 outage, anyone?)

Virtual Registries

Besides hosting local registries and caching remote ones, Artifactory offers virtual registries, a combination of the two. A virtual registry unites images from a number of local and remote registries, enabling Docker clients to conveniently use just a single registry. Administrators can then change the backing registries when needed, with no change required on the client’s side.

This is most useful for humans who need ad hoc access to multiple registries corresponding to multiple environments. For example, the QA, Staging, Production, and Docker Hub registries can be combined so they appear to the user as one registry instead of four separate instances. Machines running in the Production environment, meanwhile, could be granted access to only the Production registry, preventing any accidental use of unverified images.

Conclusion

Artifactory is a feature-rich artifact tool for Maven, NPM, and many other repository types. The addition of Docker registries provides a simple solution that caters to organizations implementing Continuous Delivery practices.

If you’re outgrowing an existing vanilla Docker Registry, or are entirely new to the Docker game, give Artifactory a try for your organization. It won’t disappoint.

Practicing User Safety at GitHub

GitHub explains a few of the guidelines it follows for harassment and abuse prevention when developing new features. Interesting points in the article include a list of privacy-oriented questions to ask yourself when developing a new feature, providing useful audit logs for retrospectives, and minimizing abuse from newly created accounts by restricting their access to the service’s capabilities. Taken together, these considerations make abuse harder to carry out and the service a better environment for its users.

See the original article.

A few Gotchas with Shopify API Development

I had a fun weekend with my roommate hacking on the Shopify API and learning the Ruby on Rails framework. Shopify makes it super easy to begin building apps for the Shopify App Store, essentially the Apple App Store equivalent for Shopify store owners, who use it to add features to their customer-facing and backend admin interfaces. Shopify provides two handy Ruby gems to speed up development: shopify_app and shopify_api. Below is an overview of the two gems, followed by their weaknesses.

Shopify provides a handy gem called shopify_app which makes it simple to start developing an app for the Shopify App Store. The gem provides Rails generators to create controllers, add webhooks, configure the basic models, and add the required OAuth authentication: just enough to get started.

The shopify_api gem is a thin wrapper around the Shopify API. shopify_app integrates it into the controllers automatically, making requests for a store’s data very simple.
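For example, a controller inheriting from the gem’s authenticated base class can query the store’s data directly. A minimal sketch (the controller name is mine):

class ProductsController < ShopifyApp::AuthenticatedController
  def index
    # shopify_app has already activated an API session for the
    # current shop, so the resource classes can be used directly.
    @products = ShopifyAPI::Product.find(:all, params: { limit: 10 })
  end
end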

Frustrations With the API

Getting a developer account and a development store created takes no time at all, and the API documentation is clear for the most part. Developing against the Plus APIs for the first time, however, can be frustrating. For example, querying the Discount API, Gift Card API, Multipass API, or User API results in unhelpful 404 errors. The development store’s admin interface adds to the confusion, since it includes a discounts section where discounts can seemingly be added and removed.

By default, anyone who signs up as a developer only has access to the standard API endpoints, with no access to the Plus endpoints. The Plus endpoints are only available to stores that pay for Shopify Plus, and after digging through many Shopify discussion boards I found a Shopify employee explaining that developers need to work with a store that pays for Shopify Plus to get access to them. The 404 error returned by the API explains none of this and only adds to the confusion.

Another area that could be improved is the scant mention of tiered developer accounts. At a minimum, the API should return a useful error message in the response body explaining what is needed to gain access.

Webhooks Could be Easier to Work With

The shopify_app gem provides a simple way to declare the webhooks that should be registered with the Shopify API for the app to function. The declared webhooks are registered only once, when the app is added to a store. During development you may add and remove many webhooks, and since they are only registered at install time, the most straightforward way to refresh them is to remove the app from the store and add it again.

This becomes pretty tedious, which is why I did some digging around in the shopify_app code and created the following code sample to synchronize the required webhooks with the Shopify API. Simply hit this controller, or call the containing code from somewhere else in the codebase.
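The gist of it, assuming a stock shopify_app install where the webhooks are declared in the initializer (the controller name and route are mine):

class WebhooksSyncController < ShopifyApp::AuthenticatedController
  def sync
    # Drop every webhook currently registered with Shopify for this shop...
    ShopifyAPI::Webhook.find(:all).each(&:destroy)

    # ...then re-register the ones declared in config/initializers/shopify_app.rb.
    ShopifyApp.configuration.webhooks.each do |webhook|
      ShopifyAPI::Webhook.create(webhook)
    end

    head :ok
  end
end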

If there’s a better solution to this problem please let me know.

Lastly, to hold on to your sanity, the httplog gem is useful for tracking the HTTP calls that shopify_app, shopify_api, and any other gem make.

Wrapping Up

The developer experience on the Shopify API and App Store is quite pleasant. The platform has been around long enough to build up a flourishing community of people asking questions and sharing code. I believe the issues outlined above can easily be solved and will make Shopify an even more pleasant platform.

The Software Engineering Daily Podcast is Highly Addictive

Over the past several months the Software Engineering Daily podcast has entered my regular listening list. I can’t remember where I discovered it, but I was amazed at the frequency at which new episodes were released and the breadth of topics. Since episodes come out every weekday there’s always more than enough content to listen to. I’ve updated My Top Tech, Software and Comedy Podcast List to include Software Engineering Daily. Here are a few episodes that have stood out:

Scheduling with Adrian Cockcroft was quite timely, as part of my final paper for my undergraduate degree focused on the breadth of topics in scheduling. Adrian discussed many of the principles of scheduling and related them to how they were applied at Netflix and at earlier companies. Scheduling is a real necessity for software developers to understand, since it occurs in every layer of the software and hardware stack.

Developer Roles with Dave Curry and Fred George was very entertaining and informative, presenting the idea of “Developer Anarchy”, a different structure for running (or not running) development teams. Instead of hiring project managers, quality assurance, or DBAs to fill specific niches of a development team, you mainly hire programmers and leave them to perform all of those tasks as they deem necessary.

Infrastructure with Datanauts’ Chris Wahl and Ethan Banks entertained as much as it informed. This episode had a more casual feel, with the hosts telling stories and bringing years of experience to bear on the current and future direction of infrastructure in all layers of the stack. Comparing the current success of Kubernetes to the not-so-promising OpenStack was quite informative: the many organizations backing OpenStack pulled the project toward different priorities and visions, whereas Kubernetes, driven by Google alone, has a single, unified vision.


EDIT 2017-02-26 – Add Datanauts episode

Hook, Line and Sinker: How My Early Interests Got Me to Where I Am Today

Having an exciting software job at a newly acquired company is opening up so many possibilities and making possible the projects I want to tackle to make things better. Whether it’s big projects, like containerizing our apps for scalability or implementing Continuous Delivery to ship our software faster, or smaller ones, like versioning our dependencies for traceability and ease of deployment, or updating to the latest Java version for the performance improvements and new libraries, it’s nonstop fun that upgrades my problem-solving skills, improves the lives of our team and customers, and gives me a track record of making positive change.

After finishing school I can focus more on teaching myself new skills and technologies to use and apply during my professional career. Currently I listen to DevOps and software podcasts while traveling and read articles about Docker and other technologies in my free time. The next logical step is to start applying the knowledge I’ve gained, both at work and in side projects.

At least it’s not all bad in academia: the fourth-year Computer Security class I’m taking is immensely fun, and I’m glad to have a captivating class this semester. I can credit the Security Now podcast, of which I’ve listened to around 500 of its 543 episodes as of this writing (read: 10 years of listening!), for giving me practical knowledge of current security practices and news, diving deep into the details where necessary.

Dr. Anil Somayaji, the professor of the Computer Security course, is an excellent lecturer and a hacker at his roots. His interactive teaching style makes the possibly dry subject of security interesting (if you think of security as dry; who would?), and his coursework is very useful in that it promotes self-teaching and helping others. Each week every student must submit a hacking journal, writing about their adventures, frustrations, successes, and failures hacking on security-related things: using Metasploit to break into a computer, putting a backdoor into OpenSSH, figuring out how to configure a firewall, and so on. The list goes on. An online chatroom is used to share resources, figure out hacking problems with other members of the class, and interact with the professor. (Other classes should definitely start using one.)

I’m glad to have had the drive to explore and learn when I was young. Throughout my childhood I spent my time hacking gaming consoles, jailbreaking iPods, experimenting with Linux, and, most of all, having a website! Not this website; there was a website before jonsimpson.ca. It was jonniesweb.com. I distinctly remember creating logos for my website in MS Paint, printing them out, and putting one up beside the Canadian flag posted in my fifth-grade class. I would use MS FrontPage 97 to add jokes, pictures, cheat codes, YouTube and Google Video links, games, and links to friends’ Piczo sites. I remember going through a few designs: space themed, blue themed, red themed… Each iteration improved in content and coding skill, and I even got interested in PHP and used a sweet template.

Then middle school and high school caught up with me and I stopped updating the website. Sooner or later my dad stopped supporting my hobby, eventually letting the web hosting expire.

Fast forward a few years and what was once a childhood interest has turned into an education and career choice. Building a website sparked the fire, pursuing a degree gave me the drive, and doing co-op (soon to be full-time) at work has shown me the many different problems to be solved.

My plan is to work my ass off in all of my classes, finish my degree, and follow my passions, applying my knowledge and sharing my solutions both at my job and on this blog, ultimately building a successful and happy career.

At the moment I’m just glad that I don’t have a crappy professor.

Push-button Deployment of a Docker Compose Project

I was recently working out how to automate the deployment of a simple Docker Compose project: a non-mission-critical setup consisting of a redis container and a Docker image of Hubot that we’ve built. Here’s the gist of the docker-compose.yml file:
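It boils down to something like this (the volume paths are approximate):

version: '2'
services:
  zbot:
    image: zdirect/zbot      # no tag, so "latest" is implied
    restart: always
    depends_on:
      - redis
  redis:
    image: redis
    restart: always
    volumes:
      - ./redis-data:/data   # persisted on the host, survives container recreation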

Whenever a new version of the zdirect/zbot image is built and published to the registry, a deploy script can be run. The script used to automatically deploy a new version of the Docker Compose project is shown here:
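In essence (the project path is illustrative):

#!/bin/bash
set -e
cd /opt/zbot          # wherever the docker-compose.yml lives
docker-compose pull   # fetch the newest "latest" image from the registry
docker-compose up -d  # recreate any container whose image changed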

Yup, that’s all. It’s that simple. Behind the curtains, this pulls the latest version of the image; since the docker-compose.yml file doesn’t specify a tag, it defaults to latest. The old container is then removed and a new one is started. Any data specified in volumes is safe, since it’s mounted on the host and not stored inside the container. Obviously a more complicated project would have a more involved deployment, but simpler is better!

Integrating this deployment script into Rundeck, Jenkins, or your build tool of choice is a piece of cake. That isn’t covered here, but it might be in a future post. This automation bridges the gap between building your code and running it on your servers, a.k.a. the last-mile problem of continuous delivery.

Acquisition, Docker and Rundeck

ZDirect, the wonderful company I work for, has been acquired by TravelClick, a billion-dollar hospitality solutions company.

First of all: Woohoo! I couldn’t be more excited to be around at this time to jump-start my career.

One of the changes to happen as soon as possible is the consolidation of our datacentre into TravelClick’s. One of our devs recently discovered Docker and got interested in its power (hallelujah, finally it’s not just me!). Later on I bring up Rundeck, a tool for organizing our ad-hoc and scheduled jobs that will assist in the move to Docker.

Docker

His plan is to Dockerize everything we have running in the datacentre, making it easier for our applications to be run/deployed/tested/you-name-it. The bosses are fine with that and are backing him up. I’m excited, since this is a fun project right up my alley.

Since I’m working my ass off trying to finish my degree, I’m only in one day a week to wash bottles and offer some Docker expertise. Last Friday I had a good chat with the dev working on the Docker stuff. We chatted about Kubernetes, Swarm, load balancing, storage volumes, registries, cron, and the state of our datacentre. It was quite productive, since we bounced ideas off each other. He’s always busy juggling a hundred things at once, so I offered to give him a hand setting up a Docker registry.

By the end of the day I had a secure Docker registry running on one of our servers, with Jenkins building an example project (ZBot, our Hubot-based chatroom robot) and pushing the image to the registry after each build. An almost complete continuous delivery pipeline. What would make it better is a way to easily deploy the newly built Docker image to a host.

Rundeck

Rundeck is a job scheduler and runbook automation tool. In other words, it makes it easy to define and run tasks on any of the servers in your datacentre from a nice web UI.

Currently we have a lot of cron jobs scheduled to run periodically across many servers for integration, backup, and maintenance purposes. We also ssh into servers to run various commands for support and operations.

Here’s the next piece of our puzzle. Rundeck fits many of our use cases, a few of which are as follows:

  • Deployment automation (bridge the gap between Jenkins and the servers)
  • Run and monitor scheduled jobs
  • Logging and accountability for ad-hoc commands
  • Integrate with our chatroom, for all the ChatOps
  • Automate more of production

As we move towards Dockerizing all of our apps, we have to decide what to do with the cron jobs and scheduled tasks that we run. Since we’re ultimately going to move datacentres, it makes the most sense to turn the cron jobs into scheduled jobs in Rundeck so we can manage them from a centralized view. Need to run a scheduled job on another machine? No problem, change where the job runs. Need to rerun it? Just click a button, as in the sketch below.
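As a rough sketch, a crontab entry like 30 2 * * * /usr/local/bin/backup.sh becomes a Rundeck job definition along these lines (the names, schedule, and node filter are made up; the YAML can be loaded through the web UI or API):

- name: nightly-backup
  group: maintenance
  description: Nightly database backup, lifted from db01's crontab
  schedule:
    time:
      hour: '02'
      minute: '30'
    month: '*'
    year: '*'
  sequence:
    commands:
      - exec: /usr/local/bin/backup.sh
  nodefilters:
    filter: 'name: db01'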

The developers here wear many hats; dev and ops are two of them. Because we’re jumping from mindset to mindset, it makes sense to save time by automating the tedious parts while trying not to get in each other’s way. Rundeck provides the automation and visibility to achieve this speed.

As we put all of our apps and services into Docker containers, Rundeck will let us manage that change while continuing to move fast.


If you’re interested in joining us on this action packed journey, we’re hiring.

Twenty-one!

“Why am I not in the US of A right now?” Good question. That’s what my dad asked me when he congratulated me on my 21st birthday this past October 1st.

Unable to be in the US that Thursday evening for academic reasons, I spent Wednesday night with my roommates grabbing a few exotic beers at our local pub. The past week has been heavy with assignments and already a midterm, which is why this post has been delayed. But enough with the excuses!

Some of the achievements and big changes since my last birthday:

  • Visited and vacationed in the USA for the first time
  • Moved into a new (and nicer) house with my roommates
  • Became a happier person through mindfulness
  • Expanded my beer and wine taste

This past year I’ve also been working on a bunch of small projects, some actively, others not so much. I’ve pressed 3,139,757 keys and clicked my mouse 769,532 times, made 310 contributions to repos on my GitHub, read 5 books, and am still partway through 6 more.

I’m more than halfway through my Computer Science program at Carleton: 3.5 years in, with less than a year and a half to go. It gets more exciting as the topics I study grow more advanced and interesting. I’m looking forward to finishing and starting my full-time career.

I must have read a countless number of articles, pretty much every day, covering Docker, continuous delivery, microservices, distributed systems, coding, and programming languages. I’m amazed at how much I’ve read and how I apply it to my work and experiments.

This year I really want to make drastic improvements to the operations side of my job at ZDirect; learn Go, Ruby, and CoffeeScript (in that order); become more mindful; and get into shape (because keyboard arms aren’t sexy).