• 1. The Process Integration Trilogy: Software-driven Labour

    Reading Time: 9 minutes

    This is the first in a series of articles that aims to elaborate on the various aspects and nuances of end-to-end process integration.

    What is process integration? Organisations – companies, businesses, groups, people – do things in definite patterns that achieve certain results that are important to them – and that thus define them. Banks do a bunch of things every day, every month, every year. So do hospitals, airlines, manufacturers, traders, shippers, governments, universities, insurers, clubs, retailers, and all other types of organisations.

    These things that they do are their business processes. Businesses function in a repeatable manner owing to the various processes that their operators have established. Every department in these businesses could have hundreds or even thousands of business processes.

    Continue reading …

  • 2. Motivation and bias

    Reading Time: 4 minutes

    In this post I will enumerate and lay bare all my biases relating to process automation and outline my motivation for creating this series about process integration.

    Through the various posts in this series, I will attempt to explain and logically argue my motivations and biases. This post, therefore, is only an index.

    I use the word “bias” to refer to negative opinions that I have about certain things, and I use the word “motivation” to refer to the positive aspects that I intend to promote.

    I agree that sometimes opinions can be subjective and relative – my perceived negatives could be someone else’s positives and vice-versa.

    Continue reading …

  • 3. Structure and tone

    Reading Time: 3 minutes

    Volume 1 deals mostly with what has been happening in the automation space from the beginning until now – the past, leading up to the present. I will intermix many stories, incidents, and experiences from my professional journey. I will describe my learnings and conclusions, and you will then know why I have the biases that I mentioned in the previous post.

    All the articles that have been posted on this site can be considered to belong under the “Volume 1” category. I have purposely omitted the volume prefix for the moment in order to keep the article titles brief and uncluttered.

    Continue reading …

  • 4. Why Now?

    Reading Time: 3 minutes

    About half of the material in these Volume 1 articles was planned and written in 2017, after I concluded my association with the acquirers of my company, Inventys.

    Why, then, did it take me five years to release this content? Is it relevant any more? With the kind of momentum towards RPA and hyperautomation created by second-generation UI Automation companies and their billion-dollar spending power, is it even worthwhile trying to change the direction of flow? Knowing all too well that the “early majority” and “late majority” of technology adopters (as described in Geoffrey Moore’s Technology Adoption Curve) are run by risk-averse, herd-mentality-driven, pompous technology leaders, would it make any difference if I presented my opinion based on sixteen years of work in this integration and automation space?

    Continue reading …

  • 5. Origin

    Reading Time: 4 minutes

    An automaton is a machine that performs a range of actions based on pre-configured instructions. Throughout history there have been various recorded descriptions of automata. Jacquard’s loom was a landmark automaton that helped to automate the process of weaving cloth. Electro-mechanical calculating devices created during the nineteenth century provided momentum for the even more complex calculating devices that emerged in the first part of the twentieth century. Automata theory provides the theoretical foundation for computing. The von Neumann stored-program computer (c. 1945) is the basic reference model for all modern computers. Thus began the computing era.

    Continue reading …

  • 6. Evolution

    Reading Time: 10 minutes

    Software-driven labour productivity (or the lack thereof) has been under the lens for a very long time. Through the various generations of software technologies, there have been consistent attempts to provide relief to human fingers as they trudge wearily over keyboards.

    The simplest form of relief started with the concept of macros. These are sequences of actions that are pre-configured and later triggered when required. The facility to define macros is usually provided by the user interface technology used by the application software. Each application software tool provides its own macro facility. Platform vendors such as Microsoft and Apple provide generic macro definition and execution layers that can span user interfaces (UIs) built natively on their respective platforms.
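
    To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of a macro as a pre-configured sequence of actions that is replayed when triggered. The Macro class and the stand-in actions are illustrative only and do not represent any particular platform’s macro facility.

      from dataclasses import dataclass, field
      from typing import Callable, List, Tuple

      @dataclass
      class Macro:
          """A macro: a named, pre-configured sequence of actions."""
          name: str
          steps: List[Tuple[Callable[..., None], tuple]] = field(default_factory=list)

          def record(self, action: Callable[..., None], *args) -> None:
              """Append an action and its arguments to the sequence."""
              self.steps.append((action, args))

          def run(self) -> None:
              """Replay the recorded actions, in order, when triggered."""
              for action, args in self.steps:
                  action(*args)

      # Stand-ins for UI actions; a real macro facility would issue
      # keystrokes, menu selections, or clicks through the platform.
      def type_text(text: str) -> None:
          print(f"type: {text}")

      def press_key(key: str) -> None:
          print(f"press: {key}")

      # Pre-configure once, trigger later (for example, from a hotkey).
      signature = Macro("email-signature")
      signature.record(type_text, "Best regards,")
      signature.record(press_key, "Enter")
      signature.record(type_text, "A. Author")
      signature.run()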

    Continue reading …

  • 7. Adoption

    Reading Time: 19 minutes

    In the previous article I talked about the evolution of user interface integration technology. In the early days we (the creators of Inventys Fusion) classified this technology as “enterprise mashup”.

    Customer care contact centres were popular targets of this “mashup” technology, since their agents often had to grapple with over a dozen legacy applications in order to service callers’ requests. We then moved to back-office use cases — and there were plenty of these — in banks, insurance companies, airlines, telcos, etc. Back-office use cases did not often require “single-view” mashup screens. These use cases were mostly workflow-related — automating the typical software-driven labour that I have talked about in previous articles.

    Continue reading …

  • 8. Disillusion

    Reading Time: 8 minutes

    In previous articles, I traced the origin of screen-integration technology, its evolution into its modern form, and its morphing into UI Automation, with the primary intent of automating business processes that were subject to the inefficiencies of software-driven labour.

    The product-market fit for this technology had existed for a very long time; but buyer behaviour, which I correlate with human IQ, yet again proved Geoffrey Moore’s technology adoption lifecycle theory. Tired of the notoriously dogmatic and slow buying behaviour of IT departments in large enterprises, we had shifted our focus to the people who actually wanted to get things done — the business and operations people. This target group needed a non-technical basis for relating to the product.

    Continue reading …

  • 9. Resuscitation

    Reading Time: 13 minutes

    “RPA is dead (or dying)” was the call made by several technology analysts and research firms for a period of time.

    But it was too late. Customer organisations had changed: new teams and departments had been set up. That cringeworthy term “Robotics” appeared on many name cards and LinkedIn profiles. More importantly, major US-based VCs had already put big money into these startups.

    Something had to be done to address the two main issues:

    Continue reading …

  • 10. Think, McFly! Think!

    Reading Time: 15 minutes

    Let there be no doubt that what is called hyperautomation is singularly and principally founded on RPA. If RPA had not become well known, there would not have been any definition of hyperautomation.

    Call it by any name, and augment it with any ancillary technology, but RPA and hyperautomation in the hands of business operations teams will always remain a technology that promises to reduce the volume of human interventions; it will never deliver complete independence from software-driven labour in enterprises.

    Continue reading …

  • 11. The Onus To Comply

    Reading Time: 12 minutes

    Enterprise software companies have earned hundreds of billions of dollars making and selling software integration products and services.

    Why are enterprises spending such amounts, and what are software integration technologies?

    In my first article in the Process Integration Trilogy, I explained the term “business process” and discussed how software applications and systems are scattered across enterprises, resembling an archipelago. The islands in the archipelago need to communicate with each other in order to achieve certain higher-level business outcomes (often termed “complex business processes”). For enterprise software systems, an archipelago of islands is the preferred architecture, as opposed to a monolithic “vast mainland”. The reasons for this preference are intuitive and logical. Each modular software system performs a definite collection of use cases and is responsible for owning and maintaining certain collections of enterprise data. This makes each modular software system easier to develop and maintain. A monolith would cause administrative and architectural logjams. Organisations evolve, and various new capabilities are added periodically; a monolithic software model would make it impossible for companies to add and update new products and capabilities in their organisation.

    In real life, almost all organisations are based on a collection of hundreds or even thousands of modular, heterogeneous software systems.
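
    As a minimal, hypothetical sketch of this archipelago idea (the OrderSystem, BillingSystem, and fulfil_order names are purely illustrative): each modular system owns its own data and use cases, and the higher-level “complex business process” is achieved only when the islands communicate through their interfaces.

      class OrderSystem:
          """One island: owns order data and order-related use cases."""
          def __init__(self) -> None:
              self._orders = {}

          def place_order(self, order_id: str, item: str, amount: float) -> None:
              self._orders[order_id] = {"item": item, "amount": amount}

          def get_order(self, order_id: str) -> dict:
              return self._orders[order_id]

      class BillingSystem:
          """Another island: owns invoice data and billing use cases."""
          def __init__(self) -> None:
              self._invoices = []

          def raise_invoice(self, order_id: str, amount: float) -> dict:
              invoice = {"order_id": order_id, "amount": amount}
              self._invoices.append(invoice)
              return invoice

      def fulfil_order(orders: OrderSystem, billing: BillingSystem, order_id: str) -> dict:
          """A higher-level 'complex business process' spanning both islands."""
          order = orders.get_order(order_id)
          return billing.raise_invoice(order_id, order["amount"])

      orders, billing = OrderSystem(), BillingSystem()
      orders.place_order("ORD-1", "widget", 49.50)
      print(fulfil_order(orders, billing, "ORD-1"))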

    Continue reading …