1. The Process Integration Trilogy: Software-driven Labour

ॐ पूर्णमदः पूर्णमिदम् पूर्णात् पूर्णमुदच्यते
पूर्णस्य पूर्णमादाय पूर्णमेवावशिष्यते
ॐ शान्तिः शान्तिः शान्तिः

That is Complete-Wholeness, This is Complete-Wholeness,
This Complete-Wholeness arose from That Complete-Wholeness,
When Complete-Wholeness is removed from Complete-Wholeness,
What remains is Complete-Wholeness.


This is the first in a series of articles that aims to elaborate various aspects and nuances of end-to-end process integration.

What is process integration? Organisations – companies, businesses, groups, people – do things in definite patterns to achieve results that are important to them, and these patterns define them. Banks do a bunch of things every day, every month, every year. So do hospitals, airlines, manufacturers, traders, shippers, governments, universities, insurers, clubs, retailers, and all other types of organisations.

These things that they do are their business processes. Businesses function in a repeatable manner owing to the business processes established by their operators. Every department in these businesses could have hundreds or even thousands of business processes.

From the second half of the twentieth century, organisations have increasingly used computer-based systems to perform these business processes. By the end of the century, these computer-based systems, or information technology (IT) systems, had become the backbone of every bricks-and-mortar enterprise. Most companies were bricks-and-mortar in those days anyway, but everything they did was based on IT systems.

I prefer terms like “data processing systems”, “control systems”, and “data processing and control systems” in place of the more fashionable term “IT system”. But I will try to stick to the term “IT system” in this series, in order to improve readability.

In the twenty-first century, we have seen the emergence of many enterprises that are primarily information technology based. Some of these enterprises offer services similar to those of bricks-and-mortar companies, but the difference is that they were built from the ground up (or massively transformed) to use IT systems as the primary driver of the service they sell. Popular examples are — everyone knows this — Amazon, Uber, and Airbnb.

In contrast, there are also new enterprises whose IT systems produce or sell things that didn’t exist in the old bricks-and-mortar world. Examples are Google, Facebook, LinkedIn, Spotify, and many more. These companies rely on the Internet and Web-based technologies, coupled with modern consumer devices such as laptops, tablets, and smartphones, to provide their services.

The invariant factor among all these types and flavours of companies is their use of IT systems. These days even a person working solo can subscribe to Web-based services that handle their schedule, accounts, communications, and marketing.

Thus the proliferation of IT systems in all types of businesses is complete.

These IT systems have to be programmed and configured to perform the various things they are supposed to do. Because businesses are complex, enterprises need many different software applications. Depending on the size and sophistication of the business, several dozen, hundreds, or sometimes over a thousand software elements may function within an organisation.

Process integration involves tools and techniques for creating process flows that are meaningful composites of simpler flows and that achieve a definite, intended outcome. Often process integration requires composing process flows from multiple disparate software systems.

Keeping software applications loosely coupled enables separation of concerns and allows systems to operate without mutually choking things up. This paradigm makes it easier to operate and maintain IT systems. Unfortunately, the price of loose coupling is that on most occasions there is no coupling. Data processing applications in enterprises almost always resemble archipelagos.

But businesses and organisations require cooperation among their various components. There is a need to bridge many of the islands in the archipelago. Enterprise application integration (EAI) software and certain types of software architectures (pipes, shared memory, etc.) facilitate the construction of these inter-island bridges. Things are thus held in place for some time.

In the figure above, the boxes labelled ‘A’, ‘B’, ‘C’, etc., depict software applications deployed in an organisation. Some of these applications may have been built to connect with others, such as A–B and D–E, whereas applications such as C and F may not be connected to any other application.

There are two distinct categories of business process flow execution, and these can be characterised by a simple metaphor.

Metaphors for characterising business process flow execution styles

Business processes that follow the fork pattern have a process “definition and control” component that directs and orchestrates the various steps of the process. The fork handle is the controller, and each prong of the fork is an action performed in a distinct software application in the environment. In this paradigm, each component application (each prong) is independent and unaware of the other prongs. It is the job of the process controller to coordinate and trigger the individual applications’ functionalities to achieve the desired outcome.

Business processes that resemble the staircase pattern rely on every participating component being able to send control and data messages to, and receive them from, its peers in the process chain.

View of ‘fork’ and ‘staircase’ model process flows

A characteristic of the fork model is the existence of process controller software, labelled G in the figure above, which controls one or more process flows involving the applications A, B, D, and E. In the logical view, G assumes a “superior” position as a controller. From a deployment perspective, however, G is simply another software application with the ability to connect to the systems it controls.
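As an illustration only (none of this code is from the article, and the function names simply mirror the labels in the figure), the fork model can be sketched in a few lines of Python. The controller G calls each prong in turn, and no prong knows about the others:

```python
# Minimal sketch of the 'fork' model: a controller (G) orchestrates
# independent applications (the prongs), which never talk to each other.

def app_a(order):
    return {**order, "validated": True}    # e.g. validate the request

def app_b(order):
    return {**order, "provisioned": True}  # e.g. provision a service

def app_e(order):
    return {**order, "billed": True}       # e.g. raise an invoice

def controller_g(order):
    """The fork handle: directs every step; each prong is unaware of the rest."""
    for step in (app_a, app_b, app_e):
        order = step(order)
    return order

result = controller_g({"id": 42})
```

Note that the sequencing knowledge lives entirely in `controller_g`; reordering or adding a step touches only the controller, never the prongs.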

In the staircase scheme, there is no distinct process controller. The participating applications send data and control messages in an orderly manner as part of a process flow. Apart from performing their own specific functionalities, these applications somehow “know” how to cooperate in order to serve a higher-level process flow. Thus at some point (when the process is triggered) application A will send an appropriate message to B, which performs some actions and later relays its outcome to D, and so on.
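The staircase scheme can be sketched the same way (again, the names are purely illustrative): there is no controller, and each application does its own work and then relays the message to the next peer in the chain:

```python
# Minimal sketch of the 'staircase' model: no central controller; each
# application performs its step and relays the message to its successor.

def app_d(msg):
    return {**msg, "billed": True}    # last step in the chain

def app_b(msg):
    msg = {**msg, "provisioned": True}  # do its own work...
    return app_d(msg)                   # ...then relay to its peer D

def app_a(msg):
    msg = {**msg, "validated": True}    # trigger point of the process
    return app_b(msg)                   # relay to its peer B

result = app_a({"id": 42})
```

Here the sequencing knowledge is smeared across the participants: every application hard-codes who comes next, which is exactly why changing a staircase flow means changing the applications themselves.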

In real life scenarios, there can be very complex hybrid models of fork and staircase type process flows. That is, a single process may involve certain segments that flow like a staircase and some segments that are performed by a controller using a fork pattern.

As businesses evolve, many changes occur in their styles of functioning. The corresponding IT systems also need to evolve to support these changes. Each island thus changes its ways of functioning – some islands dissolve and new islands are formed. Previously interconnected islands start drifting apart, and the connecting bridges are no longer tenable.

Evolution and transformation is intrinsic to everything in this universe.

At any given point in time, however, IT systems trail behind the business’s preferred ways of functioning. That is, over time, businesses decide to change their operations either to accommodate prevailing conditions or to take advantage of them. Since everything that businesses do depends on data and rules created in IT systems, these rules and logic need to be updated (or created) to implement the changes in business operations.

Examples? Plenty: banks change rules about account verification, create new types of accounts, offer new kinds of loans, and change the rules relating to loan approval; telcos offer new types of products and services with different kinds of rules and incentives; airlines evolve their ticketing mechanisms to allow for granular pricing covering seat selection, meal selection, queue priority, and luggage allowance – the list is infinite.

Change is the only constant and businesses have to continuously alter their IT systems to reflect such changes. This comes at a price. Changes to IT systems take time and money. Sometimes the business changes are transitory and may not even exist by the time the IT systems are changed.

So, what do they do? They prioritise and commission the changes that are most important and likely to remain in force for longer periods.

There are broadly two categories of changes. The first type requires changes to one software group in the organisation. A decision based on a cost-benefit analysis is made, and if favourable, a project is commissioned to implement the changes.

The other type of change is one in which multiple software applications need to be bridged using additional software, allowing data to flow from one system to other systems and thereby triggering actions on those systems.

Let us use a typical scenario in a telecommunications (telco) company in order to illustrate the situation.

In a telco, client details from a customer relationship management (CRM) system need to flow into a provisioning system for mobile services in order to activate a specific type of service. A billing system then reads the usage records of the service and computes the bill for the user. Usually CRMs, service provisioning systems, and billing systems are standalone and don’t communicate with each other. But in a telco, you need to get these systems to interact with each other. Otherwise, the process of making changes to customers’ services requires multiple distinct steps: modify the CRM to take note of the new subscription, modify the service’s software to enable the feature for the subscriber, and modify the billing system to take note of the billing plan under which usage of the service is to be calculated.
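To make the composite flow concrete, here is a hedged sketch in Python. The classes and method names are hypothetical stand-ins for real vendor APIs, not any actual CRM, provisioning, or billing product:

```python
# Hypothetical sketch of an end-to-end telco subscription change.
# Each class below stands in for a separate vendor system's API.

class CRM:
    def record_subscription(self, customer, plan):
        return {"customer": customer, "plan": plan}

class Provisioning:
    def enable_feature(self, customer, plan):
        return f"{plan} enabled for {customer}"

class Billing:
    def attach_plan(self, customer, plan):
        return f"{customer} billed under {plan}"

def subscribe(customer, plan):
    """Composite process flow: CRM update -> service provisioning -> billing."""
    crm, prov, billing = CRM(), Provisioning(), Billing()
    record = crm.record_subscription(customer, plan)
    status = prov.enable_feature(customer, plan)
    invoice = billing.attach_plan(customer, plan)
    return record, status, invoice

result = subscribe("alice", "5G-unlimited")
```

In real life each of those three calls crosses a system boundary owned by a different vendor, which is precisely where the interoperability question of the next paragraphs arises.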

Now, over time the telco may acquire new hardware – 4G to 5G perhaps – and may craft different subscription plans – such as unused data sharing, or differentiated pricing of data depending on the type of content downloaded. Such plans and their intricate features keep changing year after year. The CRM that holds customer details may not change as rapidly over time as the service systems that provide the various telco services.

A business process that covers the entire cycle of customer subscription provisioning in a telco therefore needs to encompass updates to the CRM, the service management system, and the billing system. Each of these systems may have been built and customised by a different vendor, and they may be based on very different software and hardware technologies. So how is this business process accomplished in real life?

In this example, the end-to-end business process is broken down into components pertaining to updates on the CRM, the service provisioning system, and the billing system. Then the interoperability between the applications is checked. Interoperability between two or more systems is their ability to share data and control messages via software. For example, it is the software interoperability between an Amazon Alexa device and a robotic vacuum cleaner that allows you to speak into the Alexa device and say “… clean the living room”, which causes the Alexa device to invoke a compatible API call on the robotic vacuum cleaner and pass on the instructions “clean” and “living room”.
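As a toy illustration of that idea (the class and method names below are invented, not Amazon’s actual API), two systems interoperate as long as they agree on a shared command schema that one can construct and the other can act upon:

```python
# Toy illustration of interoperability: two systems agree on a shared
# command schema, so one can invoke an action on the other.

class RobotVacuum:
    def handle(self, command):
        # The vacuum exposes a compatible call: an action plus a location.
        return f"{command['action']}ing the {command['location']}"

class VoiceAssistant:
    def __init__(self, device):
        self.device = device  # any device that speaks the shared schema

    def on_utterance(self, text):
        # Parse "clean the living room" into a structured command.
        action, _, location = text.partition(" the ")
        return self.device.handle({"action": action, "location": location})

assistant = VoiceAssistant(RobotVacuum())
reply = assistant.on_utterance("clean the living room")
```

The point is the schema, not the devices: any appliance that accepts the same `{action, location}` message could be plugged into the assistant unchanged.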

If the participating software systems are interoperable, that is, if there exist sufficient software connection techniques to easily connect the component process flows (CRM update, service provisioning update, billing update), then the best option is to utilise those features and create a composite flow by integrating the component pieces of the process.

If, on the other hand, the participating systems are not interoperable, a cost-benefit analysis of modifying the underlying systems to enable them to interoperate is performed. Based on this analysis there are two options. First, if the benefit outweighs the cost, a project can be commissioned to implement the change in process flow, and everything returns to normal, high-efficiency operation.

But if the cost of making the changes cannot be justified by the benefits of having the process run automatically, we enter the dreadful world of software-driven labour.

Software-driven labour

The term “software-driven labour” describes the scenario where humans operate one or more software applications via the software’s user interface (UI) screens to perform steps of a business process that should ideally have been performed automatically, through capabilities built into some workflow execution software or into some of the component applications related to that process.

This situation is the root cause of hundreds of billions of dollars being spent annually, worldwide, on what I call low-fidelity process execution methods.

The problem with the decision to go with the low-fidelity approach is that the cost-benefit calculation is made at a micro level for a specific change in a small set of processes — which may in all fairness be the right decision; but the accumulation of dozens or hundreds of such micro-decisions to adopt a low-fidelity approach eventually creates a massive crater of inefficiency in organisations — big enough to be visible from the moon.

Low-fidelity process execution methods include deploying armies of humans in low-wage geographies to perform manual actions that should rightfully be performed automatically by software.

Low-fidelity methods also include currently popular technologies such as robotic process automation (RPA) and hyperautomation. I will discuss these in subsequent articles.

This problem needs to be addressed and fixed at the process integration level in order for organisations to achieve straight-through processing (STP) of all of their business processes. The advantages of achieving across-the-board STP of business processes are unparalleled — there will be collective savings of hundreds of billions of dollars per year, and the operational efficiency of organisations globally will increase by several orders of magnitude over what is prevalent today.

In order to understand process integration, we need to take a journey through what is known these days as “process automation”. The remaining articles in this Volume will trace this journey.
