A long time ago (as in, 15 years), in a place called Colorado, there was a company named ServiceMagic. Over the years, this company, now named HomeAdvisor, built software applications to support the business of connecting service professionals with homeowners.
They wrote a lot of software.
A single web application became two, then three, then a half dozen or so. All of these applications shared the same underlying set of code libraries to run the core business functions. No, that’s not quite true. All of these applications used the same single library, which contained all the business logic necessary to run the company. Attempts were made over the years to carve out pieces of this code base into separate modules, with varying degrees of success.
As we pick up the story two years ago, there was really only one way to describe the architecture of the software used to run the company known as HomeAdvisor…
A software monolith.

Problems
We may have multiple web applications, each specializing in one area of the business (e.g. the consumer website, customer management, background processes, etc.), but they are all built together, they are all tested together, and they are all released together. Some of these applications even do different things depending on where they are deployed or how they are configured. In addition, if there is a surge in web traffic to a particular area of our website, horizontal scaling means spinning up additional, reasonably large VMs with copies of the application serving that traffic. This is expensive, as each VM slot has a cost associated with it: VM space, support costs, and potentially licensing costs for any commercial software used.
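To make that last point concrete, here is a minimal sketch (the class, role names, and environment variable are hypothetical, not our actual code) of how a single deployable artifact can end up doing different things depending on how it is configured:

```java
// Hypothetical sketch: one build of the monolith, deployed everywhere,
// decides its role at startup from an environment variable set per VM.
public class MonolithBootstrap {

    public static void main(String[] args) {
        String role = System.getenv().getOrDefault("APP_ROLE", "consumer-web");

        switch (role) {
            case "consumer-web":
                startConsumerWebsite();    // serve homeowner-facing traffic
                break;
            case "customer-mgmt":
                startCustomerManagement(); // serve internal customer management screens
                break;
            case "batch":
                startBackgroundJobs();     // run background/batch processing
                break;
            default:
                throw new IllegalArgumentException("Unknown role: " + role);
        }
    }

    private static void startConsumerWebsite()    { /* boot web container with consumer routes */ }
    private static void startCustomerManagement() { /* boot web container with CRM routes */ }
    private static void startBackgroundJobs()     { /* start schedulers and queue consumers */ }
}
```

The catch, of course, is that every one of those roles drags along the entire code base, so all of them must be rebuilt and redeployed whenever any one of them changes.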
This is a problem.
The company is growing. We’re hiring more and more developers, and it’s becoming harder and harder for those new developers to understand the system, with all of its interconnected tangles of code, and to avoid stepping on each other. The smallest change in a module used by one web application can easily have unintended consequences in the other applications. This means everything must be built, tested, and deployed again!
How did we get here? The answers are many, but there are three worth calling out:
- Business Speed – As anyone who has been involved in a startup knows, speed to market is an important metric. As the company was just getting started, getting something, anything, out the door was of the utmost importance. Up until a couple of years ago, releases to production were weekly. In that environment, taking the time to consider the architectural implications of every product feature tends to go out the window more often than not. So pragmatism took over, and developers added code to the place that was easiest: the existing web application.
- Server Cost – Back in 1999, when the company was founded, server space was expensive. AWS and the like didn’t exist, so you had to run your own hardware. And by today’s standards, that hardware was expensive. So in order to get the most value out of each server, servers were loaded up with as much as they could handle. In addition, Java being what it is, there was no way anyone wanted to run more JVMs on a server than was absolutely necessary. Once again, pragmatic developers won the day and stuffed as much into each application as possible.
- Tools – For the longest time, all the source code for the system was stored in CVS (and PVCS before that). Thankfully, we have since moved to using Git as our source code control system. However, the legacy of CVS still haunts us. For those unfamiliar with the product, CVS manages versions at the individual file level, not as changesets the way most of us are used to today. What this meant was that it was very difficult to remove files from the code base. Files needed to be kept around for at least a release or two so that the system could be rolled back to a previous version, if necessary. Of course, nobody ever remembered to go back and remove legacy files unless they started causing problems.
As the company grows, the software architecture needs to evolve to support that growth, both in terms of traffic volume as well as number of developers supporting and adding to the code base. This architecture needs to support:
- Autonomous Delivery Teams – We have organized into small, agile delivery teams that should be able to work (mostly) independently.
- Loosely Coupled Components – Each team should be able to build, test and deploy portions of the system with zero, or at least minimal, disruption to other teams and parts of the system.
- Independently Scalable – If there’s suddenly a huge demand for widgets, we should be able to horizontally scale the widget service.
- Resilient to Failure – Each part of the system should be able to detect failures in other parts of the overall system (and there will be failures) and degrade gracefully; a small sketch of this idea follows the list.
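To give a flavor of that last requirement, here is a minimal sketch, assuming a hypothetical remote review service (the names are illustrative, not our actual API), of a client that degrades gracefully when its dependency fails:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: if the remote review service is down, fall back to the
// last known good response instead of failing the page that needs the data.
public class ReviewsClient {

    /** Illustrative interface for the remote call; not our actual API. */
    public interface ReviewService {
        List<String> fetchRecent(long proId) throws Exception;
    }

    private final ReviewService reviewService;
    private volatile List<String> lastKnownGood = Collections.emptyList();

    public ReviewsClient(ReviewService reviewService) {
        this.reviewService = reviewService;
    }

    public List<String> recentReviews(long proId) {
        try {
            List<String> reviews = reviewService.fetchRecent(proId);
            lastKnownGood = reviews; // remember the last successful response
            return reviews;
        } catch (Exception e) {
            // Degrade gracefully: render the page without fresh reviews
            // rather than propagating the failure to the caller.
            return lastKnownGood;
        }
    }
}
```

A real implementation would add timeouts and a circuit breaker so that a slow dependency cannot tie up threads, but the principle is the same: a failure elsewhere should dent the experience, not take it down.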
So what kind of architecture supports these new requirements? Are there patterns others have used successfully? Who do you call when a monolithic system has taken over your company?
Tune in next time, when we meet the hero of this story: microservices! Find out how this champion does battle with the evil monolith. Learn how we have dealt with the inevitable flaws any hero has, this one included.