
On Reuse

As long as I have been in IT - since I coded my first subroutine library 35 years ago - the debate on reuse has been ongoing. At times it has seemed like a ‘Holy Grail’ for software delivery, and like the Holy Grail it also seems to give rise to endless opportunities to debate whether it exists, whether it is good or bad, or even what it actually is.

Now, entering into another round of that debate, I thought it worth putting down some observations [1].

Why Reuse?

I am not going to enter into a long discussion about the economics of reuse; enough has already been written on that. I see three simple reasons to consider reuse. Every justification to reuse or not eventually boils down to one or more of the following:

  1. Productivity. Improve productivity by reusing what has already been developed and tested.
  2. Consistency. Improve consistency of process and information by ensuring that common functions are always performed in the same way.
  3. Best Practice. Improve the execution of common activities by codifying best practice and algorithms.

Of course, all manner of politics and personal bias comes into the decision-making process, but that is a cultural issue, and enough has already been written on that too.

Forms of Reuse 

The assumption is normally that the debate is about software reuse. Ultimately it is, in that the end product of what is being reused is a software artifact. But there are many points in the SDLC at which reuse may occur, and many forms it may take. For example:

  • Software Reuse. A unit of software is reused. There are many ways this can be achieved, which don’t always involve copying: instances might be duplicated, or an instance might be shared somehow, but either way developers are aware of the software they are reusing.
  • Service-based Reuse. Software is ultimately reused, but only via its service interface, which helps to decouple the consumer from the provider and better encapsulates the software being reused. Developers are less aware of the software they are reusing, and should only be aware of the service that encapsulates it.
  • Specification or Model Reuse. The specification is reused, but different instances of software are produced from it. Provided the transformation is correct in each instance, this enables consistency while allowing delivery in different technologies, for example.
  • Pattern-based Reuse. A higher form of abstraction, but a common form of ‘reuse’. More a way of reusing knowledge or ‘best practice’ than software itself.
  • Architecture or Blueprint Reuse. Similarly, an architecture may establish a blueprint that is reused.
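To make the contrast between the first two forms concrete, here is a minimal Python sketch (the `calculate_vat`, `TaxService` and `invoice_total` names and the tax rate are invented purely for illustration): in software reuse the consumer calls the implementation directly, while in service-based reuse it depends only on the service contract.

```python
from abc import ABC, abstractmethod

# Software reuse: consumers import and call this implementation directly,
# so they are coupled to the software itself.
def calculate_vat(amount: float, rate: float = 0.20) -> float:
    return round(amount * rate, 2)

# Service-based reuse: consumers depend only on this interface; the
# implementation behind it can change or be replaced without affecting them.
class TaxService(ABC):
    @abstractmethod
    def tax_due(self, amount: float) -> float: ...

class UkTaxService(TaxService):
    def tax_due(self, amount: float) -> float:
        return calculate_vat(amount)  # reuses the software, behind the contract

def invoice_total(net: float, tax: TaxService) -> float:
    # The consumer knows only the TaxService contract, not the implementation.
    return net + tax.tax_due(net)
```

The decoupling means `UkTaxService` could later be swapped for a remote service client without any change to `invoice_total`.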

So a key task is to determine what form of reuse is most appropriate to achieve the intended goal.

Scope of Reuse 

Establishing the intended scope of reuse of an asset is also key. Often, for example, people attempt to reuse an asset on an ‘enterprise’ basis where there is not actually an enterprise-wide requirement. Consequently, trying to ‘force’ the asset on projects (in order to realize the ROI for its development) only leads to conflict.

Rather, the scope of reuse should be set at an appropriate level, with the investment in its delivery commensurate with that. The scope could be:

  • For an enterprise, establishing its reuse boundaries
    • Global. Or enterprise-wide.
    • Common. Or domain or division wide, or for a product line.
    • Local. Or project, product or business unit wide.
  • For an industry or domain. More applicable to standards organizations, and to commercial software vendors or service providers.
  • Ecosystem. Intended to be reused or shared by many ecosystem participants.

Where to Reuse  

Besides the issue of scope, clearly not everything needs to be built with reuse in mind. Hence additional means are needed to distinguish reusable assets from the non-reusable: for example, using a layered architecture to classify assets into different layers and then facilitating reuse in and between the appropriate layers.

For example, separate assets into different types based on:

  • Rate of Change. Things that are more stable in nature are better candidates for reuse than those that change frequently. I would like to thank my friend and colleague Richard Veryard for introducing me to the concept of ‘Shearing Layers’ as a valuable construct in IT architecture.
  • Core or Context. As well as stable vs unstable, you might also classify assets using core or context analysis [2], where context would be most suitable for reuse.
  • Separation of Concerns. The traditional separation of presentation, process and data also helps to determine reuse. Data assets can be reused in many processes. Processes (or parts of them) can be reused in many solutions. Utilities can be reused everywhere.
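As a small, hypothetical Python sketch of that last point (the customer record and function names are invented): a single data-access asset is reused by two unrelated processes, each of which could in turn sit behind a different presentation.

```python
# Stand-in data store for the sketch.
CUSTOMERS = {"C1": {"name": "Acme", "balance": 250.0}}

# Data layer: one asset, reusable in many processes.
def get_customer(customer_id: str) -> dict:
    return CUSTOMERS[customer_id]

# Process layer: two distinct processes reuse the same data asset.
def credit_check(customer_id: str, limit: float) -> bool:
    # Passes if the outstanding balance is within the limit.
    return get_customer(customer_id)["balance"] <= limit

def statement_line(customer_id: str) -> str:
    # A different process, reusing the same data access.
    c = get_customer(customer_id)
    return f"{c['name']}: {c['balance']:.2f}"
```

Because neither process knows how customer data is stored, the data layer can be reused (or replaced) independently of them.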

Any one of the above classifications may not be sufficient on its own to determine whether an asset should be reusable, but used together, along with scope, they can certainly help to narrow down whether an asset should be delivered with reuse in mind.

Designing for Reuse 

Having determined that an asset should be reusable, it is essential that it is designed for reuse. A full discussion is beyond the scope of this short note, but key factors include:

  • Granularity. Finer-grained, tightly focused assets are more likely to be reusable in a wider spectrum of contexts. That doesn’t mean coarse-grained software deliverables are not widely reused in terms of the number of instances deployed; rather, they are less likely to be reused in contexts different from the one for which they were originally designed.
  • Generalization. The more generalized an asset is, the broader its applicability will be.
  • Configuration. The more ways an asset can be configured to suit different purposes, the more reusable it will be in different solutions.
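The generalization and configuration points can be sketched in a few lines of Python (a deliberately toy example; the function names and formatting rules are invented): a hard-coded asset serves one context, while generalizing and exposing configuration widens the contexts in which it can be reused.

```python
# Hard-coded: only reusable where pounds-and-pence formatting is wanted.
def format_gbp(amount: float) -> str:
    return f"£{amount:,.2f}"

# Generalized and configurable: the currency symbol and precision become
# parameters, so the same asset serves many more solutions.
def format_money(amount: float, symbol: str = "£", precision: int = 2) -> str:
    return f"{symbol}{amount:,.{precision}f}"
```

The trade-off noted above still applies: each added configuration option broadens applicability but also adds surface area that must be tested and maintained.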


I am not suggesting these are the only ways by which to determine reuse. Rather, it is a suggestion that only by developing such a framework within your organization can you make rational reuse decisions, and perhaps more importantly govern reuse with effective policies that codify such rules.

[1] I have documented much of this before, but it is mainly behind our pay-wall.

[2] Dealing with Darwin. Geoffrey A. Moore. 2005 http://www.dealingwithdarwin.com/index.php

More Stories By Lawrence Wilkes

Lawrence Wilkes is a consultant, author and researcher developing best practices in Service Oriented Architecture (SOA), Enterprise Architecture (EA), Application Modernization (AM), and Cloud Computing. As well as consulting to clients, Lawrence has developed education and certification programmes used by organizations and individuals the world over, as well as a knowledgebase of best practices licenced by major corporations. See the education and products pages on http://www.everware-cbdi.com
