In my previous post, I talked about Eric Ries’ The Lean Startup and how many of the ideas in the book have lost their original depth and nuance as they’ve gained popularity.

For example, a Minimum Viable Product (MVP) is supposed to be “the fastest way to get through the Build-Measure-Learn feedback loop with the minimum amount of effort.” (p. 93). And yet, a recent post on the Carbon Five blog defines an MVP this way:

An MVP is really something small – so small that it’s probably embarrassing. You just want to prove you can solve the problem and create some passionate users.

This definition says nothing about learning from the MVP or using it to test assumptions, two of its key features. While I’m singling out one particular post, this definition, or something like it, is repeated often.

In my post, I pointed out the irony that Ries himself cautioned against this (p. 279):

Throughout our celebration of the success of the Lean Startup movement, a note of caution is essential. We cannot afford to have our success breed a new pseudoscience around pivots, MVPs, and the like.

I find that this kind of dilution happens a lot. Here are a few more examples.


Taylorism

Ries points out another example of dilution when discussing Frederick Winslow Taylor’s The Principles of Scientific Management. In the agile community, Taylorism, as Scientific Management has come to be known, is treated as a horrible idea that must be countered; Kent Beck, among others, has tweeted to that effect.

But Ries points out that (p. 272):

Taylor invented modern white-collar work that sees companies as systems that must be managed at more than the level of the individual.

He then talks about the dilution that has happened since (p. 272):

Whereas Taylor preached science as a way of thinking, many people confused his message with the rigid techniques he advocated: time and motion studies, the differential piece-rate system, and – most galling of all – the idea that workers should be treated as little more than automatons. Many of these ideas proved extremely harmful and required the efforts of later theorists and managers to undo.

and (p. 277):

Unfortunately, Taylor’s insistence that scientific management does not stand in opposition to finding and promoting the best individuals was quickly forgotten. In fact, the productivity gains to be had … were so significant that subsequent generations of managers lost sight of the importance of the people who were implementing them.

If Ries is right, Taylor has been vilified for ideas that run counter to the core of his own message.

I’m not sure which view is right; I need to read the original source and decide for myself.


Waterfall

You may be familiar with the traditional software development methodology known as Waterfall. First described in a 1970 paper by Winston Royce, Waterfall divides the software development process into sequential phases such as System Requirements, Software Requirements, Analysis, Program Design, Coding, Testing, and Operations.

This model went on to become the dominant software methodology for many years.

But Royce himself admits in the paper that the ideal Waterfall model doesn’t actually work in practice. He first describes some iteration between neighboring phases:

The ordering of steps is based on the following concept: that as each step progresses and the design is further detailed, there is an iteration with the preceding and succeeding steps but rarely with the more remote steps in the sequence.

and then a much more wide-ranging violation of the process’s sequential nature:

I believe in this concept, but the implementation described above is risky and invites failure. … The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. They are not the solutions to the standard partial differential equations of mathematical physics for instance. Yet if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. A simple octal patch or redo of some isolated code will not fix these kinds of difficulties. The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. Either the requirements must be modified, or a substantial change in the design is required. In effect the development process has returned to the origin…

So for many years, most software development followed an idealized model that even its own originator had recognized as insufficient.

Agile / Extreme Programming

Even the agile software movement, and especially Beck’s own Extreme Programming (XP), both in part reactions to Taylorism and Waterfall, have been subject to the same kind of misinterpretation and dilution. Many companies have hopped on the agile bandwagon with no idea of the true roots of the movement, or even of the contents of the Agile Manifesto.

The agile community has talked about this for many years, even coining derogatory terms such as Flaccid Scrum and ScrumBut to describe these watered-down variants.

In her recent post, When Standup Can’t Be Standup, Sarah Mei reminds us that “Agile isn’t a checklist”:

Agile is a set of philosophies focused on maximizing relevant information. These philosophies lead, in many cases, to a similar set of practices. But all practices must evolve over time.

When they don’t, when they get disconnected from their grounding philosophies, they become make-work and start appealing only to the unthoughtful folks who want a set of rules to enforce. And believe me, you do not want to attract those people to your projects.


Why Does This Happen?

Why do powerful and popular concepts like these get diluted over time? I think there are a number of reasons.

Ideas evolve. Once someone has written a book on a subject, the book stays static, but the author continues to think about the ideas and develop them further. People try things and report back about what works and what doesn’t.

People have different interpretations of what they read and hear. You and I can read the exact same book and come away with different understandings because of our past experiences, existing knowledge, and biases.

Ideas get passed along from person to person. Every time an idea is passed along, it loses some of the original flavor, and picks up some new flavor from the person passing it along. Much like the old Telephone Game, messages get changed and details get lost along the way.

I think Gerald Weinberg said it best in his classic book, The Secrets of Consulting, calling this dilution “The Law of Raspberry Jam”:

The wider you spread it, the thinner it gets.

What Do We Do About It?

I’m not sure we can solve the dilution problem entirely. There are just too many ideas that affect our daily lives.

However, I think it is important to pick some of the major ideas you interact with every day and go back to the source material.

Read or re-read the early books or papers on a subject and truly understand what the originators meant.

Compare that with your current understanding or practice and see what’s different.

Do more research to see whether the differences were due to new understanding and positive evolution of the material, or if they were due to misunderstandings or misrepresentations.

Consider going back to the original formulation of the ideas and try them out from first principles.

See what you learn from the experience, and then tell us about it.