Volatility Based Decomposition


Introduction

As software architects, our responsibility is to design systems that meet not only all current requirements but all future requirements as well.  This is a pretty tall order and not easily accomplished, but it is possible.  Just because we have not done this (or have not seen it done) does not mean it cannot be done (absence of evidence is not evidence of absence).  In this blog post, I’ll uncover one of the ‘secrets’ to building future-proof systems.

The Common Path

One commonly used strategy to accomplish future-proofing is to pull out our crystal ball, over-engineer like hell, and pray for the best.  I have seen this in code bases that rely on one particular database technology but provide abstractions and implementations for supporting many types of databases (SqlServer/Oracle).  I have seen it in code bases with complex frameworks built to support deviations in workflows for every client when, in reality, there is only one global workflow that is cloned for every client.  The strategy of ‘solve for everything’ is a common misstep we have all made as software engineers at one point or another.

The problem with this crystal ball approach is that we start paying for the technical debt and may never realize value from our efforts.  Sometimes the crystal ball gives us requirements that aren’t quite right and need to be re-written before we can take advantage of them.  Sometimes the technical debt we have incurred negates the future value of the crystal ball requirement.

As soon as we start guessing what the next requirement will be and over-engineering to provide for it, we start incurring technical debt.  The damage is compounded by the opportunity cost of not working on features that actually contribute to shipping the product or are used by customers.  Once the over-engineering is complete, we start paying for the additional complexity in the system and may never realize the value later (if ever).  This usually means the additional complexity causes feature release momentum to slow and support costs to rise.

[Figure: complexity graph]

If we were architects in charge of designing and building a house, this approach would amount to building the foundation for a skyscraper and putting a small single-family house on top of it, when these are two entirely different problems.  If the problem domain changes from a single-family dwelling to a skyscraper, there is a fundamental change in the nature of the problem and we really should start from scratch to meet the new problem set.  Besides, we could never afford to build a skyscraper base for our personal homes, and we wouldn’t because it is entirely unnecessary.  This doesn’t make sense in the real world, so why do we think it is acceptable in the software business?

Functional Decomposition

The curse comes from functional decomposition.  Functional decomposition builds the system architecture from a functional or time-based view of the problem domain, where each module is based on an ordering of logical steps in a use case.  Functional decomposition leads to duplication of behaviors across modules and an explosion of modules with intricate relations inside and between them.  This approach couples multiple modules to a data contract and promotes implementing use cases in higher level terms within a higher level module, which makes it difficult to reuse the same base behavior in other use cases.  Functional decomposition also makes it difficult to provide a single point of entry.  With functional decomposition, clients get bloated as they stitch together services to provide features, and modules become tightly coupled to other modules at the same level.

As an anti-design example, let’s look at building a car from a functional perspective:

[Figure: functional decomposition of a car]

Here, we create a module for each function we need to perform within the domain of driving a car.

If we have to implement ‘Driving’ how many sub-modules would it require?

  • Accelerating
  • Braking
  • Steering
  • Roads
  • Stop Lights
  • Traffic Rules
  • etc…
  • etc…

Besides, how would you implement just ‘Accelerating’ as a stand-alone operation?

So you can see how this functional decomposition will soon explode into a tightly coupled mess…
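To make the coupling concrete, here is a minimal sketch (in Python, with purely hypothetical module names, not taken from any real system) of where this tends to lead: each ‘step’ module has to reach into its siblings just to do its job, and the driving module ends up orchestrating and duplicating everything.

    # Hypothetical functionally-decomposed modules: none of them can stand alone.
    class StopLights:
        def is_red(self) -> bool:
            return False

    class TrafficRules:
        def speed_limit(self) -> int:
            return 55

    class Braking:
        def brake(self) -> None:
            print("braking")

    class Accelerating:
        def accelerate(self, current_speed: int) -> None:
            # Accelerating cannot stand alone: it must consult stop lights and
            # traffic rules before it can decide whether to speed up.
            if StopLights().is_red() or current_speed >= TrafficRules().speed_limit():
                Braking().brake()
                return
            print("accelerating")

    class Driving:
        def drive(self) -> None:
            # 'Driving' becomes a bloated orchestrator that repeats the same checks.
            if not StopLights().is_red():
                Accelerating().accelerate(current_speed=50)
            Braking().brake()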

How about domain decomposition?

[Figure: domain decomposition of a car]

So far so good… Now comes the feature churn: What if we need a trunk to haul groceries?   What if we need cars with trunks and without trunks? Automatic and manual steering?  Power and manual windows?  How about cars with the driver seat on the right or on the left?

Where’s the abstraction and encapsulation?

We have many inter-dependencies between our modules and will end up re-re-re-re-factoring the car to handle our use cases, which will break the consumers of the car module each time.
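As a rough illustration (the option names here are hypothetical), each round of churn widens the car’s own contract, so every consumer has to be rebuilt along with it:

    # Each new option forces another change to the car's public contract...
    class Car:
        def __init__(self, has_trunk: bool, steering: str,
                     windows: str, driver_side: str) -> None:
            self.has_trunk = has_trunk      # added for the grocery use case
            self.steering = steering        # "automatic" or "manual", added later
            self.windows = windows          # "power" or "manual", later still
            self.driver_side = driver_side  # "left" or "right", yet another release

    # ...and every existing caller breaks each time the signature grows.
    car = Car(has_trunk=True, steering="manual", windows="power", driver_side="left")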

How do you know the patient (your architecture) is sick due to functional decomposition?  Here are some symptoms:

  • Operations your patient performs are grouped based on the business objects they require.
  • Your patient has a bloated front end with business logic shoved into the presentation layer.
  • The entire patient is very data-centric and CRUD-ish.
  • Functionality is spread across multiple areas and composed of smaller pieces of functionality.
  • Making a change involves making changes in multiple places.

Sounds pretty bad, but not all hope is lost.  We have to treat the underlying cause of the problem in order to fix the symptoms.

What’s the cure?….

Volatility Based Decomposition

With volatility based decomposition, we decompose the modules of the architecture based on volatility.  This works for getting both greenfield and brownfield projects straightened out.  The first order of business is to identify areas of potential change.  These areas of change can be functional but should not be confused with domain functional behaviors.  Next, we encapsulate the volatile areas in modules.

The milestones of a volatility based architecture are based on integrations instead of features.  Features are implemented as interactions between services and modules.  Now that we are combining modules to produce features (behavior), we have essentially built a factory that produces features at a rapid pace, because a new feature does not require system level changes.  Every new use case should just be a matter of integrating different modules, or the same modules in a different order.
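A minimal sketch of that idea (the service and method names are illustrative assumptions, not a prescribed design): the volatile pieces sit behind small services, and a thin manager composes them into a feature.  A new use case becomes a new composition rather than a change to the modules themselves.

    class StorageService:
        """Encapsulates where and how data is stored."""
        def save(self, record: dict) -> None:
            print(f"saved: {record}")

    class NotificationService:
        """Encapsulates who gets notified and how."""
        def notify(self, message: str) -> None:
            print(f"notify: {message}")

    class EnrollmentManager:
        """The feature is just an integration of existing modules."""
        def __init__(self, storage: StorageService, notification: NotificationService) -> None:
            self.storage = storage
            self.notification = notification

        def enroll(self, patient_id: str) -> None:
            self.storage.save({"patient": patient_id})
            self.notification.notify(f"patient {patient_id} enrolled")

    EnrollmentManager(StorageService(), NotificationService()).enroll("P-001")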

[Figure: Jenga]

This is a universal principle of good design and is not limited to software.  We encapsulate change to insulate the system from the cascading side-effects of that change.  In a system built on volatility based decomposition, we no longer have to worry about how new features will break the existing system.

This takes much more up-front architecture time than starting a project with functional decomposition, but it pays off: we can crank out features on an assembly line rather than requiring top-notch engineers to carefully orchestrate each minute change to a fragile system in a masochistic game of Jenga.

 

How To Find Volatility

We can find volatility by analyzing the problem space in a couple of ways: figuring out what will change over time and what will change across clients.  The areas identified as volatile should not have cross dependencies on one another.

Here is a rough example:

If we are building an application to simulate driving from point A to point B, what could change over time?

  • The Driver
  • The Vehicle
  • Road conditions

How about volatility across different countries?

  • Directions
  • Driving Regulations

[Figure: volatility based decomposition of the driving example]
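One way to sketch this (the interfaces and names are illustrative only): each volatile area listed above becomes its own contract, with no cross dependencies between them, and a simulation manager composes them.

    from abc import ABC, abstractmethod

    class Driver(ABC):
        @abstractmethod
        def next_action(self, conditions: str) -> str: ...

    class Vehicle(ABC):
        @abstractmethod
        def apply(self, action: str) -> None: ...

    class RoadConditions(ABC):
        @abstractmethod
        def current(self) -> str: ...

    class Directions(ABC):
        @abstractmethod
        def route(self, origin: str, destination: str) -> list[str]: ...

    class DrivingRegulations(ABC):
        @abstractmethod
        def allows(self, action: str) -> bool: ...

    class TripSimulation:
        """Composes the volatile modules; swapping any one implementation doesn't ripple."""
        def __init__(self, driver: Driver, vehicle: Vehicle, road: RoadConditions,
                     directions: Directions, regulations: DrivingRegulations) -> None:
            self.driver, self.vehicle, self.road = driver, vehicle, road
            self.directions, self.regulations = directions, regulations

        def run(self, origin: str, destination: str) -> None:
            for _leg in self.directions.route(origin, destination):
                action = self.driver.next_action(self.road.current())
                if self.regulations.allows(action):
                    self.vehicle.apply(action)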

We are on our way.  The point during requirements gathering is to figure out what could change and document these areas as central components of our architecture.  Some of these volatile areas may be out of scope.  We need to identify them as early as possible; designating a module on an architectural diagram before it is built costs nothing.  Later on we can figure out whether we want to allocate effort to design and construct the module.  After requirements gathering is well underway, we can start settling on the areas of volatility and encapsulating them in modules of the architecture.

By the way, was ‘driving’ ever actually a real requirement?  Sometimes what we receive as requirements are poor solutions to the underlying problem.  The actual requirement may be to get a package from point A to point B.  If it turns out point A is directly upstairs from point B, then driving is no longer a rational solution and was a poor design choice.  If we had built a solution with functional decomposition based around automobiles, we would have to rebuild the architecture from scratch.

The Process

After we have gone through the requirements and have a list of areas of volatility, we will hardly ever have a pure 1:1 ratio between modules and areas of volatility.  Sometimes a single module can encapsulate multiple areas of volatility.  Some areas of volatility may map to operational concepts and cannot be reasonably encapsulated.  Other areas may be encapsulated by third party components/services.

Some mapping will be straightforward.  For example, data storage volatility is encapsulated behind a data access service.  This encapsulation protects our system from changes in 1) where the storage is and 2) what technology is used to access it.  We don’t want our module to refer to the ‘Database’ but rather to ‘Storage’, because there may be multiple ways to store information that can change over time (e.g. caching, NoSQL databases, relational databases, etc).
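For instance, a minimal sketch of that encapsulation (the class and method names are assumptions, not a prescribed API): the rest of the system talks to ‘Storage’, and whether that means an in-memory cache, a relational database, or something else can change without ripples.

    from abc import ABC, abstractmethod
    from typing import Any

    class Storage(ABC):
        @abstractmethod
        def put(self, key: str, value: Any) -> None: ...
        @abstractmethod
        def get(self, key: str) -> Any: ...

    class CacheStorage(Storage):
        """In-memory storage, e.g. a cache in front of something slower."""
        def __init__(self) -> None:
            self._data: dict[str, Any] = {}
        def put(self, key: str, value: Any) -> None:
            self._data[key] = value
        def get(self, key: str) -> Any:
            return self._data.get(key)

    class RelationalStorage(Storage):
        """Would wrap SQL Server, Oracle, etc.; callers never know which."""
        def put(self, key: str, value: Any) -> None:
            raise NotImplementedError("plug in the actual database driver here")
        def get(self, key: str) -> Any:
            raise NotImplementedError("plug in the actual database driver here")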

Commonly occurring modules:

  • Notification is a utility that decouples modules.  Often, a simple pub-sub service/module will suffice to reduce coupling between modules and provide a notification mechanism (see the sketch after this list).  Other times, we’ll have volatility in who the clients are and how we would notify them.
  • Security is often volatile in who will have access to what.  More specifically, an authorization utility module is a great way to encapsulate these types of system changes.  Authentication could either be volatile or static.
  • Volatility in message structure(s) can be encapsulated in ‘Transformation’ modules that will be responsible for converting different message types to ones understood by the system.
  • Persisting workflow can encapsulate volatility that occurs when a user will have long-running workflows across multiple sessions, connections or devices.
  • Rendering volatility may be encapsulated to control changes in layouts, color preferences, etc.
  • Locale volatility can be encapsulated as a utility module that would primarily be consumed by the client layer.
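Here is the pub-sub sketch referenced above, a minimal in-process version with hypothetical topic names; a real system might swap a message broker in behind the same interface.

    from collections import defaultdict
    from typing import Callable

    class NotificationBus:
        """Publishers and subscribers only know the bus, never each other."""
        def __init__(self) -> None:
            self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, event: dict) -> None:
            for handler in self._subscribers[topic]:
                handler(event)

    bus = NotificationBus()
    bus.subscribe("trial.enrolled", lambda e: print(f"notify clinician about {e['patient']}"))
    bus.publish("trial.enrolled", {"patient": "P-001"})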

What Changes?

Avoid encapsulating changes to the nature of the business. Changes in the core business are actually very rare. We risk pouring a foundation for a skyscraper for our single-family home here.

Let’s investigate the nature of change a bit.  At the beginning of (our) time, man walked.  Later on, we developed trains, and after that, automobiles.  In another 10,000 years, what is the probability that we’ll still be driving cars?  What’s the probability that we’ll still be walking?  Here is another example: in 10,000 years, what do you think the probability will be that we will still be reading Harry Potter books?  What about the Bible?  We all intuitively understand the nature of change: the longer something has gone without changing, the longer it will be until it changes.

We need to keep this in mind as we do volatility based decomposition.  If the nature of our business/software is to help conduct clinical trials, and that is what we have been doing as a business from the beginning, then helping to conduct clinical trials will most likely remain the business for the full life-cycle of our software.  It would not make sense to build abstractions around clinical trials, because helping to conduct clinical trials is what the software as a whole should produce by composing the modules within the architecture.

Say let’s say we have revamped the documentation process of our system in the last two years.  There is a high probability that it will change again in two years and we should consider it as one possibility for an area of volatility if we plan on having our software in production for the next 5-7 years or more.

Sometimes there are constant changes due to the nature of the business, and these will continue at a pretty constant rate.  For example, there will always be new patients and new clinical trials in most of our systems.

Another thing that is crucial to take into account is what change could have a negative impact on the system that we need to account for now.  For example, if we know there will be an explosion of data due to capturing information from medical devices in the future, we should probably encapsulate the idea of a volatile measuring device (encapsulating what takes measurements to include both manual measurements taken by people and automated measurements taken by machines).
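A hedged sketch of that idea (the names are illustrative): ‘what takes measurements’ sits behind one abstraction, so a future flood of device data slots in without touching the consumers of measurements.

    from abc import ABC, abstractmethod

    class MeasurementSource(ABC):
        @abstractmethod
        def read(self) -> dict: ...

    class ManualEntry(MeasurementSource):
        """A measurement typed in by a person."""
        def read(self) -> dict:
            return {"value": 98.6, "unit": "F", "entered_by": "nurse"}

    class DeviceFeed(MeasurementSource):
        """A high-volume automated device feed added later, behind the same contract."""
        def read(self) -> dict:
            return {"value": 37.0, "unit": "C", "device_id": "thermo-01"}

    def record_vitals(source: MeasurementSource) -> None:
        print(source.read())

    record_vitals(ManualEntry())
    record_vitals(DeviceFeed())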

Anything not modeled in the architecture is assumed to not be volatile enough to warrant encapsulation.

Conclusion

Volatility based decomposition is an approach to building effective feature factories, where features are implemented by composing the modules of a well architected system.  Each module encapsulates and insulates us from system volatility/change, so the overall system is significantly more stable than with feature based decomposition.  Volatility based decomposition takes significantly more thought, work and skill than feature based decomposition, but it allows us to design systems that are future-proof, stand up to constantly changing requirements, and let us quickly crank out new features by composing modules, just like an assembly line.

If you want to dig in more to this topic check out presentations on the topic by Juval Lowy and the IDesign method.
