How Does Symmetric and Public Key Encryption Work?

With the release of Rails 5.2 and its changes to how secrets are securely stored, I thought it would be timely to write about the benefits and downsides of secrets management in Rails. It would be valuable to compare how Rails handles secrets, how Shopify handles secrets, and a few other methods from the open source community. On my journey to write about this I got caught up in explaining how symmetric and public key encryption work, so the comparison of different Rails secret management gems will have to wait for another post.

Managing secrets is now more challenging

A majority of applications created these days integrate with other applications – whether it’s for communicating with other business-critical systems, or for purely operational purposes such as log aggregation. Secrets such as usernames, passwords, and API keys are used by these apps in production to communicate with other systems securely.

The early days of the Configuration Management movement, and later the DevOps movement, rallied around and popularized a wide array of methodologies and tools for managing secrets in production. Moving from a small, artisanal, hand-crafted set of long-running servers to the modern short-lifetime cloud instance paradigm requires the discipline to manage secrets securely and repeatably, with the agility to revoke and update credentials in a matter of hours if not minutes.

While there are many ways to handle secrets while developing, testing, and deploying Rails applications, it’s important to weigh the benefits and downsides of the different methods, particularly in production. Different technologies come with different levels of security, usability, and adoption. Public/private key encryption (with RSA being the best-known implementation) is one such technology; symmetric key encryption is another.

There are many ways to handle secrets within Rails and web apps in general. It’s important to understand the underlying concepts before settling on one method or another, because making the wrong decision may result in secrets being insecure, or in security that is too hard to use.

Let’s first discuss the different types of encryption that are characteristic of the majority of secret management libraries and products out there.

Symmetric Key Encryption

Symmetric key encryption may be the simplest form of encryption to understand, but don’t let that trick you into thinking that it’s not secure. Symmetric key encryption involves one key used to both encrypt and decrypt data. This key will have to be kept secret and only be shared with trusted people and systems. Once secrets are encrypted with the key, that encrypted data can be readily shared and transferred without worry of the unencrypted data being read.

Symmetric key encryption can be illustrated with a simple example. The most straightforward method uses the binary XOR function. (This example is not representative of state-of-the-art symmetric key algorithms in use today, but it gets the point across.) The binary XOR function means “one or the other, but not both”. Here is the complete set of inputs and outputs for one binary digit:

1 XOR 1 = 0
1 XOR 0 = 1
0 XOR 1 = 1
0 XOR 0 = 0

A more complicated example would be:

10101010 XOR 01010101 = 11111111
11111111 XOR 11111111 = 00000000
11111111 XOR 01010101 = 10101010

Note that lines 1 and 3 are related. The output of line 1 is the first input of line 3, and both lines use the same second parameter. The output of line 3 is the same as the first input of line 1. In other words, XORing a value with the same parameter twice returns the original value. A further example will show this property.

Since any higher-level representation of data can be broken down into binary, the same operation can be shown with hexadecimal digits being XOR’ed with another parameter.

12345678 XOR deadbeef = cc99e897

Given the key is the hexadecimal string deadbeef and the data to be encrypted is 12345678, the XOR produces the incomprehensible result cc99e897. Guess what? This cc99e897 is encrypted. It can be saved and passed around freely. The only way to get back the secret input (i.e. 12345678) is to XOR it again with the key deadbeef. Let’s see this happen!

cc99e897 XOR deadbeef = 12345678

Fact check it yourself if you don’t believe me, but we just decrypted the data! This is the simplest example of course, so there’s a lot more that goes into symmetric key encryption to keep it secure. Block ciphers, stream ciphers, and much larger keys augment the simple XOR idea to make it secure in practice. It may be easy to guess the key in this example, but it becomes much harder the longer the key is.
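
Here’s a quick sketch of this toy XOR scheme in Ruby – purely illustrative, not a real cipher:

# Toy XOR "encryption" – illustrative only, not a real cipher.
key  = 0xdeadbeef
data = 0x12345678

encrypted = data ^ key          # => 0xcc99e897
decrypted = encrypted ^ key     # XOR with the same key again recovers the data

puts format('%08x', encrypted)  # "cc99e897"
puts format('%08x', decrypted)  # "12345678"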

This is what makes symmetric key encryption so powerful – the ability to encrypt and decrypt data with a single key. With this power comes the need to keep that single key secret and separate from the data. When symmetric key encryption is used in practice, the fewer people and systems that have the key, the better. Humans can easily lose the key, leave jobs, or worse: share the key with people of malicious intent.

Public Key Encryption

Quite opposite to how symmetric key encryption works, public key encryption (also called asymmetric key encryption, with RSA as the most common example) uses two distinct keys. In its simplest form the public key is used for encryption and the private key is used for decryption. This separates the ability to encrypt data from the ability to decrypt it. Put plainly, anyone can encrypt data with the public key, while the owner of the private key is the only one able to decrypt it. The public key can be shared with anyone without compromising the security of the encrypted data.

One tradeoff between symmetric and public key encryption is that the private key (the key used to decrypt data) never needs to be shared with other parties, whereas symmetric key encryption shares the same key with everyone. On the downside, public key encryption has multiple keys to manage, and therefore brings more overhead than symmetric key encryption.

Let’s dig into a simple example. Given a public key (n=55, e=3) and a private key (n=55, d=27) we can show the math behind public key encryption. (These numbers were fetched from here).

Encrypting

To encrypt data the function is:

c = m^e mod n

Where m is the data to encrypt, e is the public exponent, mod is the modulo operation, n is the shared modulus, and c is the encrypted data.

To encrypt the number 42 we can plug it into the formula quite simply:

c = 42^3 mod 55
c = 3

c = 3 is our encrypted data.

Decrypting

Decrypting takes a similar route, using a similar formula:

m = c^d mod n

Where c is the encrypted data, d is the private exponent, mod is the modulo operation, n is the shared modulus, and m is the decrypted data. Let’s decrypt the encrypted data c = 3:

m = 3^27 mod 55
m = 42

And there we have it, our decrypted data is back!

As we can see, a separate key is used for encryption and decryption. It’s worth restating that this example is very simplified. Many more mathematical optimizations and much larger key sizes are used to make public key encryption secure in practice.
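
If you’d like to verify the arithmetic yourself, here’s a quick sketch in Ruby (Integer#pow with a modulus argument needs Ruby 2.4 or newer):

# Textbook RSA with the tiny example keys above – far too small to be secure.
n = 55   # shared modulus
e = 3    # public exponent
d = 27   # private exponent

c = 42.pow(e, n)   # encrypt: 42**3 mod 55 => 3
m = c.pow(d, n)    # decrypt: 3**27 mod 55 => 42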

Signing – a freebie with public key encryption

Another benefit of using RSA public and private keys is that, given the private key is only held by one user, that user can sign a piece of data to prove that it was actually them who sent it. Anyone who has the matching public key can verify that the data was signed by the private key and that the data was not tampered with in transit.

When Bob needs to receive data from Alice and needs to be sure it was sent by Alice and not tampered with along the way, Alice can hash the data and then encrypt that hash with her private key. This encrypted hash is sent along with the data to Bob. Bob then uses Alice’s public key to decrypt the hash and compares it to a hash of the data that he computes himself. If the two hashes match, Bob knows the data truly came from Alice and was not tampered with while being sent to him.
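
In practice you wouldn’t roll the hash-then-encrypt steps yourself; Ruby’s OpenSSL bindings handle signing and verification. A rough sketch (the key size and message are arbitrary):

require 'openssl'

alice = OpenSSL::PKey::RSA.new(2048)   # Alice's keypair; the private half stays secret

message   = 'meet at noon'
signature = alice.sign(OpenSSL::Digest::SHA256.new, message)

# Bob verifies with Alice's public key; tampering with the message makes this false.
alice_public = alice.public_key
alice_public.verify(OpenSSL::Digest::SHA256.new, signature, message) # => true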

Wrapping up

Picking one method of encryption as the general winner at this abstract level is nonsensical. It makes more sense to start from a use case, pick the encryption method that fits it best at the abstract level, and then find a library which offers that method of encryption.

A following post will go into the tradeoffs between different encryption methods in relation to keeping secrets in Ruby on Rails applications. It will take a practical approach, explaining some of the benefits of one encryption method over another, and then give some examples of well-known libraries for each category.

Parallel GraphQL Resolvers with Futures

My team and I are building a GraphQL service that wraps multiple RESTful JSON services. The GraphQL server connects to backend services such as Zendesk, Salesforce, and even Shopify itself.

Our use case involves returning results from these backend services all from the same GraphQL query. When the GraphQL server goes out to query all of these backend services, each backend service can take multiple seconds to respond. This is a terrible experience if queries take many seconds to complete.

Since we’re running the GraphQL server in Ruby, we don’t get the nice asynchronous IO that comes with the NodeJS version of GraphQL. Because of this, the GraphQL resolvers run serially instead of in parallel – so a GraphQL query hitting five backend services that each take one second to fetch data from will take five seconds to run.

For our use case, having a GraphQL query that takes five seconds is a bad experience. What we would prefer is two seconds or less. This means performing some optimizations around how GraphQL makes the HTTP requests to the backend services. Our idea is to parallelize those HTTP requests.

First Approaches

To parallelize those HTTP requests we took a look at non-blocking HTTP libraries, different GraphQL resolvers, and Ruby concurrency primitives.

Typhoeus

Knowing that running the HTTP requests in parallel is the direction to explore, we first took a look at the Ruby library Typhoeus. Typhoeus offers a simple abstraction for performing parallel HTTP requests by wrapping the C library libcurl. Below is one of the many possible ways to use Typhoeus.
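
Roughly, a Hydra queues requests and runs them in parallel; in this sketch the URLs are placeholders:

require 'typhoeus'

hydra = Typhoeus::Hydra.new                 # runs queued requests in parallel

first_request  = Typhoeus::Request.new('https://backend-one.example.com/data')   # placeholder URL
second_request = Typhoeus::Request.new('https://backend-two.example.com/data')   # placeholder URL

hydra.queue(first_request)
hydra.queue(second_request)
hydra.run                                   # blocks until every queued request completes

puts first_request.response.code
puts second_request.response.code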

After playing around with Typhoeus, we quickly found out that it wasn’t going to work without extending the GraphQL Ruby library. It became clear that it was nontrivial to wrap a GraphQL resolver’s life cycle with a Hydra from Typhoeus. A Hydra is essentially a Future that runs multiple HTTP requests in parallel and returns when all requests are complete.

Lazy Execution

We also took a look at GraphQL Ruby’s lazy execution features. We had hoped that lazy execution would automatically optimize by running resolvers in parallel. It didn’t. Oh well.

We also tried a perverted version of lazy execution. I can’t remember why or how we came up with this method, but it was obviously overcomplicated for no good reason and didn’t work 😆

Threads and Futures

We looked back and understood the shortcomings of the earlier methods – namely, we had to find a concurrency method that would allow us to do the HTTP requests in the background without blocking the main thread until it needed the data. Based on this understanding we took a look at some Ruby concurrency primitives – both Futures (from the Concurrent Ruby library), and Threads.

I highly recommend using higher-level concurrency primitives such as Futures because of their well-defined and simple APIs, but for hastily hacking something together to see if it would work I experimented with Threads.

My teammate ended up figuring out a working example with Futures faster than I could hack my Threads example together. (I’m glad they did – we’ll see why next.) Here is a simple use of Futures in GraphQL.
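
A simplified sketch of the idea follows – ZendeskClient and SalesforceClient are hypothetical stand-ins for the real backend clients, and the schema uses the define-style API of the time:

require 'graphql'
require 'concurrent'

QueryType = GraphQL::ObjectType.define do
  name 'Query'

  field :tickets, types[types.String] do
    resolve ->(_obj, _args, _ctx) {
      # Kick off the slow HTTP call in the background and return immediately.
      Concurrent::Future.execute { ZendeskClient.fetch_tickets }     # hypothetical client
    }
  end

  field :accounts, types[types.String] do
    resolve ->(_obj, _args, _ctx) {
      Concurrent::Future.execute { SalesforceClient.fetch_accounts } # hypothetical client
    }
  end
end

Schema = GraphQL::Schema.define do
  query QueryType
  # When GraphQL Ruby actually needs the data it calls #value! on the Future,
  # blocking only at that point.
  lazy_resolve(Concurrent::Future, :value!)
end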

It’s not clear at first, but according to the GraphQL Ruby docs, any GraphQL resolver can return the data or can return something that can then return the data. In the code example above, we use the latter by returning a Concurrent::Future in each resolver, and having the lazy_resolve(Concurrent::Future, :value!) in the GraphQL schema. This means that when a resolver returns a Concurrent::Future, the lazy_resolve part tells GraphQL Ruby to call :value! on the future when it really needs the data.

What does all of this mean? When GraphQL goes to fulfill a query, all the resolvers involved with the query quickly spawn Futures that start executing in the background. GraphQL then moves to the phase where it builds the result. Since it now needs the data from the Futures, it calls the potentially blocking operation value! on each Future.

The beautiful thing here is that we don’t have to worry about whether the Futures have finished fetching their data yet. This is because of the powerful contract we get with using Futures – the call to value! (or even just value) will block until the data is available.

Conclusion

We ended up settling on the last design – utilizing Futures to let the main thread push as much asynchronous work into the background as possible.

As seen through our thought process, all we needed was a way to start the execution of a long-running HTTP request and give control back to the main thread as fast as possible. It was clear from the early ideas of using concurrent HTTP request libraries (Typhoeus) that we were on the right track, but we weren’t understanding the problem perfectly.

Part of that was not understanding the GraphQL Ruby library. Part of it was also being fuzzy on our concurrency primitives and libraries. Once we had taken a look at GraphQL Ruby’s lazy loading features, it became clear that we needed to kick off the HTTP request and immediately give control back to the GraphQL Ruby library. Once we understood this, the solution became clear, and we became confident after some prototypes that used Futures.

I enjoyed the problem solving process we went through, as well as the writing that resulted from it. The process taught both of us some valuable engineering lessons about collaborative, up-front prototyping and design, since we couldn’t have achieved this outcome on our own. Additionally, writing about this success can help others with the same problem, not to mention teach them about the different techniques we met along the way.

Zero to One Hundred in Six Months: Notes on Learning Ruby and Rails

When they say you’ll like learning and programming in Ruby, they really mean it. From my experience, learning and professionally using Ruby and Ruby on Rails day-to-day has been quite straightforward and friendly. The rate at which you learn Ruby and Rails is limited only by how fast you can obtain and use knowledge from online resources, books, or other people.

It’s common for people who join Shopify to not know how to program in Ruby, yet they will be required to. The Ruby community has produced a great number of beginner-to-intermediate guides for newcomers to quickly get up to speed programming in Ruby. At Shopify, since the feedback is so fast, you’re able to get into an intense virtuous cycle of learning. Since you’re able to code and ship fast, you’re able to learn faster.

From personal experience I found it quite useful to do a deep dive into Rails before starting to learn the full Ruby language. I focused on reading the entire Agile Web Development with Rails 5 book, which consists of a short primer on Ruby, followed by the bulk of the book on developing an online store using Rails, and lastly an in-depth look at each Rails module. I completed this book over the two weeks before starting at Shopify to give myself a head start on learning.

Roughly the first two months at Shopify were split between working on small tasks by myself, pair-programming with others, and reading a number of Ruby and Rails articles. At the end of two months I found myself able to take pieces of work from our weekly sprints and complete them end to end without feeling slow, and without requiring a team member to guide me through the entire change.

Code reviews on GitHub of the changes that I and others made provided a strong signal of how my Ruby and Rails knowledge and style were progressing. At the start, reviews of my code consisted of a lot of comments on style and better methods to use. As more and more code reviews were performed over time my intuition and knowledge increased, resulting in better code and fewer review comments. The bite-sized improvements gained in each code review slowly built up my knowledge and helped guide me towards areas of further learning.

Mastery of Ruby and Rails comes over months to years of constant use. This is where the lesser-known to obscure language features are understood and put to use, or explicitly not put to use (I’m looking at you, metaprogramming!). Some examples are the unary & operator, bindings, and even Ruby internals such as object allocation and how Ruby programs are interpreted.

Coming from the statically-typed Java world, Ruby and Rails are INSANE with the things you can do in a dynamic language. My favourite dynamic-language features so far are Module#prepend for inheritance, and the ability to live-reload code.

After a sufficient amount of time gathering knowledge and experience, you gain the ability to help others along their path of learning. This not only benefits their understanding, but it also reinforces your knowledge of the subject.

Some of the things I look forward to in the future are learning how to optimize Rails apps, demystifying metaprogramming, reading Ruby Under a Microscope, and digging into the video back-catalogue of Destroy All Software. I hope you have a good journey too!

What the hack? – or how my first capture the flag went

The 2017 BSides Security Conference, just outside of Ottawa, was a two-day event running October 5th to 6th. It was packed with talks, lock picking, and a capture the flag (CTF) competition. Pretty great for a free conference.

On the second day of the conference I decided to join one of the Shopify CTF teams since it looked like a ton of fun. Actually, I think it was the deep house playing 24/7 that lured me into the dim and crowded CTF room of the conference centre. I subbed in for one of my friends on Shopify’s Red Team – a suitable name for Shopify’s second CTF team, the first team being named the Blue Team.

I thought I knew what CTFs were all about – hacking challenges, they say. But I was completely unprepared. My “so-called” 10+ years of listening to the Security Now podcast didn’t exactly prepare me for the hands-on experience required for CTFs. It was quite the learning experience, since most of the flags remaining on day 2 were difficult for a newbie to capture.

Having something of a background in security and hacking helps, though it doesn’t bridge the gap to the hands-on experience and intuition required to solve CTF challenges. These challenges require experience and practice in thinking like an attacker.

For example, it’s one thing to understand that data can be hidden in images via steganography, but it’s another thing completely to actually extract the hidden data from an image.

Instead of wasting time on flags I knew nothing about, I focused on the topics I had experience with. Most of the flags I attempted involved WEP and WPA cracking with aircrack-ng and its associated collection tools. I was not able to inject packets with my setup, but luckily some other competitors did the hard work for me. After a few hours of unsuccessful attempts to crack the Wi-Fi networks I conceded that my approach wasn’t working.

I moved on to a new flag that involved breaking into an old, exploitable version of Joomla. After asking a teammate for some help, we found a script on exploit-db that would raise privileges to admin for any user. After running the exploit it took me a while to realize it had run successfully, because the flag was inside a Page that was locked for editing by someone else. The ‘locked for editing’ state didn’t allow reading the Page, but once I figured out that the Page had a context menu to unlock it, I was able to view the flag. That made me facepalm, both at Joomla’s UI and at my inability to figure that out sooner.

After a day filled with a couple of muffins, a few slices of pizza, and countless teas, the CTF concluded around 5pm. Winners were announced and thankfully our team didn’t fail too hard. I came out of the competition having met a bunch of colleagues from different parts of the company, and with a good idea of what to expect in future CTFs. I’ll definitely be attending another CTF.

My team, the Red Team, placed somewhere around 5th or 6th. Not too bad for carrying a handicap like myself. I’ve got to hand it to Shopify, they have some seriously talented Security folks! No wonder Shopify’s Blue Team came in first!

Twenty-Three!

I’m Twenty-Three, dude! The first of October fell on a Sunday this year. The personal milestones and events that have occurred certainly make this a year for me to remember. Here are a few of the main ones to note:

  • Completed my Bachelor of Computer Science – five years of hard work has finally(?) paid off
  • Started a new job at Shopify – one of the hottest, fastest-growing, and most prestigious unicorn companies
  • Ran my first 10k race during the Ottawa Race Weekend
  • Travelled along the California Coast, around the heart of New York, and all over Montreal

The celebrations started on Saturday morning. A bunch of friends and colleagues of mine grabbed breakfast to wish a colleague farewell and best wishes as they head back to school.

The crowd storms the field as Carleton wins at Sportsball!

Just like last year, a larger group of us met up at the TD Place for the annual university football Panda Game between Carleton and Ottawa U (Carleton won, of course :P). A lot of time was spent laughing at our once younger selves acting foolish in the crowd. After the win we spent a few hours pigging out at an all-you-can-eat sushi restaurant.

To finish the day off we visited the pool and hot tub at a friend’s apartment, and played the card game What do you Meme? Sunday, my actual birthday, has been spent relaxing and writing this post. I’m sure my colleagues will have more festivities planned for next week.

Additionally, this year I got more into road biking – putting just over a thousand kilometres on a new road bike, and visiting a few new locations in the Ottawa-Gatineau area – Cafe British in Aylmer, Quebec is worth the trek on a nice sunny day.

Since I’ve been interested in craft beers, this year I received a homebrewing kit as a gift. The process is laborious for a day or two but is quite rewarding at the end. I’ve completed two large batches so far – a wheat beer and an IPA. They’re pretty drinkable, but the quality isn’t high enough for the ingredients I used. I’ll definitely have to make a higher quality batch of beer, or even delve into making wine.

At the end of October last year I travelled with my aunt and a few of her friends to Manhattan in New York. What a city! The sheer size and bustle of everything going on at all hours of the day is intoxicating. The restaurant scene was tasty, even when eating within a reasonable budget. The attractions and Central Park were fascinating. I would recommend that any first-timers get a CityPASS for access to the major attractions. It was definitely a memorable week.

Me shaking Leo’s hand at the TWiT studios.

For a graduation trip in June I flew over to California with my mom. We spent a solid 10 days RVing and camping from north of San Francisco all the way down to San Diego. The climate, scenery, and attractions made it an amazing experience. We stopped by the TWiT studios to see Leo Laporte, whom I’ve looked up to as a mentor from afar for 11 years. Leo’s podcasts have taught me a staggering amount about technology news and computer security. It was great to see him in person and watch a live recording of Security Now with Steve Gibson.

A San Diego sunset.

During the trip we had to stop by the Computer History Museum. Holy cow! That museum is big. I could have spent a few days there reading everything. One part of the museum I found particularly memorable was showing my mom a diagram of the history of programming languages, pointing to the ones I knew, and the ones I would soon learn at Shopify. Seeing some of the first Google servers was nostalgic in a way – they were scrappy machines built to get a lot of computing power for cheap.

Some of the rough goals I have this year are to advance my career, improve on my social skills, become friends with more people, and build up some muscle. I have surprised myself already with the speed and progress that has been made across these goals so far.

Since starting work at Shopify the thought of moving closer to work has really been bugging me. A 20 minute bike ride is great in the summertime, but Winter is Coming – and Ottawa doesn’t fall short with its winters. The social scene with colleagues and all the activities downtown are further factors pulling me to move closer. I’m really liking the idea of moving into a 1-bedroom apartment – gaining more privacy and being able to focus more on myself compared to living with my entertaining and sometimes distracting (in a good way) roommates.

I’m not sure how I can top the major events from this past year. Maybe moving into my very own place, attending a conference or two, performing my first conference talk, or even some travelling with friends? I’m down for all of those.

Installing Go the Right Way

100% Derpy

It’s a pain to get the latest version of the Go programming language installed on a Debian or Ubuntu system.

Oftentimes if your operating system is not the latest, there is only a slim chance that an up-to-date version of Go will be available. You may get lucky and find newer versions of Go in some random person’s PPA, where they’ve backported newer versions to older operating systems. This works, but you’re reliant on the package maintainer to update and push out newly released versions.

Other options for installing the latest version of Go may involve building the package from source. This method is often tedious and can be error prone given the number of steps involved. Not exactly for the faint of heart.

Command line tools have been built for certain programming languages to streamline the installation of new versions. For Go, GVM is the Go Version Manager. Inspired by RVM, the Ruby Version Manager, GVM makes it quite simple to install multiple versions of Go, and to switch between them with one simple command.

The only downside of GVM is that it’s not installed via a system package (e.g. a deb file). Don’t let that worry you too much though! Installation is as simple as running the following curl-bash, and then using the gvm command to start installing different versions of Go. Here’s the installation guide/readme.

bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)

One confusing point: using GVM to install the latest version of Go resulted in a failed installation. This made no sense at first. Eventually, RTFM’ing led to the understanding that you first have to install an earlier version of Go to “bootstrap” the installation of any version of Go later than 1.5. Explained here in more detail.
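
Per the GVM readme, the bootstrap looks roughly like this (the version numbers are just examples; -B installs a prebuilt binary so no Go compiler is needed yet):

gvm install go1.4 -B
gvm use go1.4
export GOROOT_BOOTSTRAP=$GOROOT
gvm install go1.9
gvm use go1.9 --default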

After following their instructions to install Go 1.4, it was possible to install the latest version of Go and get on with coding!

Private Docker Repositories with Artifactory

A while ago I was looking into what it takes to set up a private Docker Registry. The simplest way involves running the vanilla Docker Registry image and a small amount of configuration (vanilla here distinguishes the official Docker Registry from Artifactory’s Docker Registry offering). The vanilla Docker Registry is great for proof of concepts or for people who want to design a custom solution, but in organizations where multiple environments (QA, staging, prod) are wired together using a Continuous Delivery pipeline, JFrog Artifactory is well suited for the task.

Artifactory, the fantastic artifact repository for storing your Jars, Gems, and other valuables, has an extension to host Docker Repositories, storing and managing Docker images as first-class citizens of Artifactory.

Features

Here are a few compelling features which make Artifactory worthwhile over the vanilla Docker Registry.

Role-based access control

The Docker Registry image doesn’t come with any fine-grained access control. The best that can be done is to allow or disallow access to all operations on the registry with a .htpasswd file. In the best scenario, each user of the registry has their own username and password.

Artifactory uses its own fine-grained access control mechanisms to secure the registry – enabling users and groups to be assigned permissions to read, write, deploy, and modify properties. Access can be configured through the Artifactory web UI, REST API, or AD/LDAP.

Transport Layer Security

If enabled, Artifactory serves its Docker Registries over the same TLS encryption it uses for everything else. Unlike a vanilla Docker Registry, there is no need to set up a reverse proxy to tunnel all insecure HTTP connections over HTTPS. The web UI offers a screen to copy and paste authentication information for connecting to the secured Artifactory Registry.

Data Retention

Artifactory has the option to automatically purge old Docker images once the number of unique tags grows past a certain size. This keeps the number of stored images, and therefore the storage space, within reason. Not purging old images can lead to running out of disk space or, for you cloud users, expensive object storage bills.

Image Promotion

Continuous Delivery defines the concept of pipelines. These pipelines represent the flow of commits from when a developer checks their code into the SCM, all the way through CI, and eventually into production. Organizations that use multiple environments to validate their software changes “promote” a version of the software from one environment to the next. A version is only promoted if it passes the validation requirements for that environment.

For example, the promotion of version 123 would first go through the QA environment, then the Staging environment, then the Production environment.

Artifactory includes Docker image promotion as a first-class feature, separating it from the vanilla Docker Registry. What would be a series of manual steps, or a script to run, is now a single API endpoint to promote a Docker image from one registry to another.
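
As a sketch, promotion is one REST call. The instance host, repository names, image, and credentials below are made up, and the exact payload fields should be checked against the JFrog REST API docs for your Artifactory version:

require 'net/http'
require 'json'
require 'uri'

# Hypothetical Artifactory instance and repositories.
uri = URI('https://artifactory.example.com/artifactory/api/docker/docker-staging-local/v2/promote')

request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.basic_auth('ci-user', ENV.fetch('ARTIFACTORY_PASSWORD'))
request.body = {
  targetRepo:       'docker-prod-local',   # registry to promote into
  dockerRepository: 'my-service',          # image being promoted
  tag:              '123',                 # version being promoted
  copy:             true                   # copy rather than move
}.to_json

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts response.code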

Browsing for Images

The Artifactory UI already has the ability to browse the artifacts contained in Maven, NPM, and other types of repositories. It was only natural to offer the same for Docker Registries. All images in a repository can be listed and searched, and each image can be further described by the tags and layers that compose it.

The current vanilla Docker Registry doesn’t have a GUI. It is only through third-party projects that a GUI can be provided to offer the same functionality as Artifactory.

Remote Registries

Artifactory has the ability to provide a caching layer for registries. Performance is gained when images and metadata are fetched from the cached Artifactory instance, preventing the time and latency incurred from going to the original registry. Resiliency is also gained since the Artifactory instance can continue serving cached images and metadata to satisfy client requests even when the remote registry has become unavailable. (S3 outage anyone?)

Virtual Registries

Besides hosting local registries and caching remote ones, Artifactory offers virtual registries, which are a combination of the two. Virtual registries unite images from a number of local and remote registries, enabling Docker clients to conveniently use just a single registry. Administrators are then able to change the backing registries when needed, requiring no change on the client’s side.

This is most useful for humans who need ad hoc access to multiple registries that correspond to multiple environments. For example, the QA, Staging, Production, and Docker Hub registries can be combined together, making it seem like one registry to the user instead of four different instances. Machines running in the Production environment, for example, could only have access to the Production Docker Registry, thereby preventing any accidental usage of unverified images.

Conclusion

Artifactory is a feature-rich artifact tool for Maven, NPM, and many other repository types. The addition of Docker Registries to Artifactory provides a simple solution that caters to organizations implementing Continuous Delivery practices.

If you’re outgrowing an existing vanilla Docker Registry, or are entirely new to the Docker game, then give Artifactory a try for your organization – it won’t disappoint.

Practicing User Safety at GitHub

GitHub explains a few of their guidelines for harassment and abuse prevention when they’re developing new features. Some of the interesting points in the article include a list of privacy-oriented questions to ask yourself when developing a new feature, providing useful audit logs for retrospectives, and minimizing abuse from newly created accounts by restricting access to the service’s capabilities. Taking all of these points into consideration makes it harder for abuse to occur, making the service a better environment for its users.

See the original article.

A few Gotchas with Shopify API Development

I had a fun weekend with my roommate hacking on the Shopify API and learning the Ruby on Rails framework. Shopify makes it super easy to begin building Shopify Apps for the Shopify App Store – essentially the Apple App Store equivalent for Shopify store owners to add features to their customer-facing and backend admin interfaces. Shopify provides two handy Ruby gems to speed up development: shopify_app and shopify_api. An overview of the two gems is given below, followed by their weaknesses.

Shopify provides a handy gem called shopify_app which makes it simple to start developing an app for the Shopify App Store. The gem provides Rails generators to create controllers, add webhooks, configure the basic models, and add the required OAuth authentication – just enough to get started.

The shopify_api gem is a thin wrapper around the Shopify API. shopify_app integrates it into the controllers automatically, making requests for a store’s data very simple.
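
For example, inside a controller that shopify_app has wired up with an authenticated session, fetching a store’s data looks roughly like this (using the ActiveResource-style calls the gem exposed at the time; the controller and base class names are just examples):

class ProductsController < AuthenticatedController  # or however your app names its shopify_app-authenticated base controller
  def index
    # shopify_app has already activated a ShopifyAPI session for the current shop.
    @shop     = ShopifyAPI::Shop.current
    @products = ShopifyAPI::Product.find(:all, params: { limit: 10 })
  end
end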

Frustrations With the API

The process of getting a developer account and development store created takes no time at all. The API documentation is clear for the most part. However, developing against the Plus APIs can be frustrating when using them for the first time. For example, querying the Discount API, Gift Card API, Multipass API, or User API results in unhelpful 404 errors. The development store’s admin interface is misleading, as a discounts section can be accessed where discounts may be added and removed.

By default, anyone who signs up to become a developer only has access to the standard API endpoints, with no access to the Plus endpoints. The Plus endpoints are only available to stores which pay for Shopify Plus, and after digging into many Shopify discussion boards it was explained by a Shopify employee that developers need to work with a store that pays for Shopify Plus to get access to those endpoints. The 404 error returned by the API didn’t explain this and only added confusion to the situation.

One area that could be improved is the documentation, which makes little mention of tiered developer accounts. At the very least, the API should give a useful error message in the response’s body explaining what is needed to gain access.

Webhooks Could be Easier to Work With

The shopify_app gem provides a simple way to define any webhooks that should be registered with the Shopify API for the app to function. The defined webhooks are registered only once, after the app is added to a store. During development you may add and remove many webhooks for your app. Since defined webhooks are only registered when the app is added to a store, the most straightforward way to refresh the webhooks is to remove the app from the store and then add it again.

This can become pretty tedious which is why I did some digging around in the shopify_app code and created the following code sample to synchronize the required webhooks with the Shopify API. Simply hit this controller or call the containing code somewhere in the codebase.
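
A sketch of the approach – the controller name is arbitrary, and it assumes shopify_app has already activated a ShopifyAPI session and that your webhooks are declared in the shopify_app initializer:

class WebhookSyncController < AuthenticatedController  # or your shopify_app-authenticated base controller
  def create
    required = ShopifyApp.configuration.webhooks   # webhooks declared in config/initializers/shopify_app.rb
    existing = ShopifyAPI::Webhook.all

    # Remove registered webhooks that are no longer declared.
    existing.each do |webhook|
      keep = required.any? { |attrs| attrs[:topic] == webhook.topic && attrs[:address] == webhook.address }
      webhook.destroy unless keep
    end

    # Register declared webhooks that are missing.
    required.each do |attrs|
      present = existing.any? { |webhook| webhook.topic == attrs[:topic] && webhook.address == attrs[:address] }
      ShopifyAPI::Webhook.create(attrs) unless present
    end

    head :ok
  end
end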

If there’s a better solution to this problem please let me know.

Lastly, to keep your sanity, the httplog gem is useful for tracking the HTTP calls that shopify_app, shopify_api, and any other gem make.

Wrapping Up

The developer experience on the Shopify API and App Store is quite pleasing. It has been around long enough to build up a flourishing community of people asking questions and sharing code. I believe the issues outlined above can be easily solved, and doing so will make Shopify an even more pleasing platform.

The Software Engineering Daily Podcast is Highly Addictive

Over the past several months the Software Engineering Daily podcast has entered my regular listening list. I can’t remember where I discovered it, but I was amazed at the frequency at which new episodes were released and the breadth of topics. Since episodes come out every weekday there’s always more than enough content to listen to. I’ve updated My Top Tech, Software and Comedy Podcast List to include Software Engineering Daily. Here are a few episodes that have stood out:

Scheduling with Adrian Cockcroft was quite timely, as part of my final paper for my undergraduate degree focused on the breadth of topics in scheduling. Adrian discussed many of the principles of scheduling and related them to how they were applied at Netflix and earlier companies. Scheduling is really a necessity for software developers to understand, as it occurs in all layers of the software and hardware stack.

Developer Roles with Dave Curry and Fred George was very entertaining and informative as it presented the idea of “Developer Anarchy”, a different structure for running (or not running) development teams. Instead of hiring Project Managers, Quality Assurance, or DBAs to fill specific niches on a development team, you mainly hire programmers and leave them to perform all of those tasks according to what they deem necessary.

Infrastructure with Datanauts’ Chris Wahl and Ethan Banks entertained as much as it informed. This episode had a more casual feel as the hosts told stories and brought years of experience to bear on the current and future direction of infrastructure in all layers of the stack. Comparing the current success of Kubernetes to the not-so-promising OpenStack was quite informative: the multiple supporting organizations behind OpenStack drove the project toward differing priorities and visions, whereas Google, being the single organization driving Kubernetes, gave it one unified vision.


EDIT 2017-02-26 – Add Datanauts episode