Over time, large systems come to resemble human civilizations that have evolved over centuries or millennia. Cities are built upon the remains of older cities; cultural norms are handed down from generation to generation; revolutions come and go; architectural styles and tastes change; modernization happens.
As the developers learn and grow, they adopt new design philosophies, techniques, and approaches.
New developers join the project, bringing in new ideas.
Old developers leave the project, reducing the team’s tribal knowledge, but also making the team less likely to fall back on the trite excuse of “we’ve always done it this way.”
New tools, languages, and frameworks arise and are adopted.
Consider how a Rails app may evolve over time:
The application starts out as a typical Rails app, following the default conventions.
The controllers start getting fat, brittle, and hard to test, so we start moving behavior into the models following the “skinny controllers; fat models” advice.
The models start getting fat and the tests start running too slow because everything touches the database, so we start moving behavior into service objects.
The jQuery starts getting messy, so we adopt a lightweight front-end framework such as Backbone.
We want more and more dynamic behavior, so we migrate to a full-featured front-end framework like Ember, Angular, or React.
The back-end keeps growing, so we start splitting out separate services into a service-oriented architecture.
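The service-object step in the evolution above can be sketched in plain Ruby. This is a minimal illustration rather than real Rails code: the `Order` struct and `PlaceOrder` class are hypothetical names, and in an actual app the service would wrap ActiveRecord models and a database transaction instead of an in-memory struct.

```ruby
# Hypothetical domain object standing in for an ActiveRecord model.
Order = Struct.new(:items, :total, :placed) do
  def place!
    self.placed = true
  end
end

# The service object owns one business operation end to end, so the
# logic no longer lives in a fat controller or model and can be
# unit-tested without touching the database.
class PlaceOrder
  def initialize(order)
    @order = order
  end

  def call
    @order.total = @order.items.sum { |item| item[:price] }
    @order.place!
    @order
  end
end

order = PlaceOrder.new(Order.new([{ price: 5 }, { price: 7 }])).call
order.total   # => 12
order.placed  # => true
```

A controller action then collapses to a few lines that build the service and call it, which is what keeps the tests fast as the model layer grows.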
The entire time this evolution is happening, the application keeps growing because we need to constantly add new features. In addition, the team has learned better techniques for architecture, design, and testing.
After such an evolutionary path, our system has become like an archaeological dig site. It has “strata” representing the different eras in its history.
The study of past human civilizations through their physical remains is called archaeology. Google defines archaeology as:
The study of human history and prehistory through the excavation of sites and the analysis of artifacts and other physical remains.
When new developers come onto the team, they don’t know this history. They don’t know which structures are from which generation of the system. It’s hard to know which patterns to follow and which to migrate away from. They don’t know which parts of the application need to be modernized as they are touched.
If we’re not careful, our developers will only be able to find their way around using “excavation of sites and the analysis of artifacts and other physical remains.” This is not a speedy process.
The more strata in our system and the more outdated our tools and codebase, the slower the team will move when adding new features. New developers will take longer to onboard. It will become harder to attract and keep good developers.
What is the solution to this problem? Should we just keep using our original tools and not adopt anything new? Should we stop the world every time we decide to move forward so we can bring all of the old code up to date with our new ideas? Neither of these seems like a good choice.
Staying in one place is a form of technical debt. The further out of date our tools get, the harder they become to support and maintain.
Moving our tools forward without bringing the old code up to date is also a form of technical debt. Every time we touch some of the old code, we have to remember how we used to do things, or we have to decide that we now need to modernize it, making our current feature that much more expensive to add.
Stopping the world to modernize has a huge opportunity cost. The time spent modernizing is time spent not adding the next feature that will attract new customers.
There needs to be a balance, and the tradeoffs should be made with careful consideration. It isn’t wise to go chasing after every new fad that comes out. But it’s also important to keep your eyes open to see what’s coming next and to make strategic decisions to adopt new technology when there’s a clear benefit to it.
After deciding to adopt something new, the next careful consideration is how much time and effort to put into bringing the old code forward.
I suggest the following approach any time you’ve decided to adopt a new technique, tool, or framework:
Make it very clear to everyone on the team what the history of the system is. What are the different eras the system has come through? Which parts of the system belong to which era? What parts of the system are modern? Where are the best patterns to re-use? What code is considered exemplary? This is especially important information for new developers, so it must be continually socialized throughout the team.
Write all new code the new way.
Immediately modernize strategic parts of the system. There are likely a few features that would really benefit from the new way; they’re probably the ones that tipped the scales in favor of making the change. Take advantage of that. Establish a beachhead for the new way to give it traction and to provide more examples of the new right way to do things.
Any significant changes to older parts of the system should include the effort to modernize. You get to decide what “significant changes” means in your world, but it’s probably smaller than you think it should be. The bar for modernizing needs to be low.
Maintain an ongoing effort to bring the old code forward. Perhaps a track that always has a fraction of your team assigned, a background task that people work on a bit at a time, a half-day or day per week for everyone, or a once-a-quarter hack week. Focus on modernizing. Again, you probably need to invest more in this effort than you think you should. Work to minimize the number of archaeological strata in your system by modernizing the oldest parts of the application first.
Using this approach will keep you on a healthy, balanced path that will allow your application to live long and prosper (RIP Leonard Nimoy).