Series: Rethinking
A king complained that his messages took too long to reach the provinces. His minister gave every courier the fastest horse in the kingdom. The messages still arrived late. The roads between the towns had not been built.
Enterprise technology has followed the same pattern for twenty-five years — faster humans, smarter bots, better copilots — each generation making the individual courier faster while leaving the roads unbuilt. The integration layer, where systems actually exchange data, has not moved. And making each individual worker more productive does not fix what happens between the systems they operate.
In brief: Copilots and MCP-powered assistants are personal productivity tools — they make individual workers faster, but they do not automate enterprise operations. Even when the copilot sits with a business operations professional, it helps the person; it does not systematise the process. The same AI capabilities, relocated from the workaround layer to the integration layer, would do something different: connect the systems themselves, apply rules consistently across every case, and make the workaround unnecessary. This article describes that architecture and why this generation of AI is the first that could actually build it.
In 2002, the answer was a shared services centre in Manila — three hundred people at screens, moving data between systems that could not talk to each other. In 2012, the answer was RPA — software bots replicating what the people in Manila did, faster and without bathroom breaks. In 2018, the answer was cognitive RPA — the bots could now read documents, not just click buttons. In 2021, the answer was hyperautomation — orchestration layers coordinating fleets of bots across enterprise workflows. In 2025, the answer is Copilot — a large language model sitting beside the human worker, helping them navigate the same systems faster and with greater contextual awareness.
Five generations, five waves of technology investment — and one architectural constant: every one of them operates in the gap between enterprise systems, working around integration failures that remain untouched underneath. The technology changes; the layer does not.
This article is the constructive companion to “Rethinking AI for Automation: Where the Light Is,” which diagnosed the misdirection — AI overwhelmingly aimed at the workaround layer rather than the integration layer. Here, the question shifts from what is wrong to what would be right. The capabilities powering today’s copilots and agentic systems are extraordinary. The argument is not that these capabilities are wasted. It is that they are misplaced — and that relocating them to the integration layer would address a problem the enterprise world has been reorganising around, rather than solving, for twenty-five years.
I want to be precise about this distinction, because it is the hinge of everything that follows.
The Copilot Moment
To understand what copilots actually are — architecturally, not commercially — set aside the branding for a moment. Microsoft 365 Copilot, Salesforce Einstein Copilot, ServiceNow’s Now Assist, GitHub Copilot, and the growing ecosystem of industry-specific copilots all share the same structural pattern: a large language model positioned between the human worker and the application, augmenting the human’s interaction with the software interface. The LLM reads emails, summarises documents, drafts responses, retrieves knowledge from organisational repositories, and helps the worker move through complex systems faster.
The productivity gains are real. Research consistently shows that underwriters, claims adjusters, operations analysts, and knowledge workers across industries spend sixty to seventy percent of their time on administrative tasks — email management, document handling, data re-keying between systems, meeting notes. Copilots address this directly. An underwriter who receives a forty-page submission package and gets a clear summary in seconds makes a better-informed decision than one who skimmed it because they were rushed. A logistics coordinator who can ask a natural-language question about shipping regulations and get an immediate, contextualised answer is faster. These are not small things.
But consider the architecture. The copilot sits between the human and the application. It helps the human use the application more effectively. The human is still the engine — reading, deciding, navigating, entering data, and moving between systems. The copilot has made the human faster and better-informed. It has not made the human less necessary. The claims adjuster who previously spent twenty minutes reading a policy document now spends three — but she still sits at the same screen, still navigates the same systems, still bridges the same gaps between applications that cannot exchange data directly. The copilot is, architecturally, the latest occupant of Box B — the workaround layer that exists because the core enterprise systems in Box A cannot interoperate.
For those encountering this framework for the first time: Box A refers to the core enterprise software systems where real data processing happens — ERP, CRM, billing, compliance, policy administration, claims management. Box B is the workaround layer — humans, back offices, shared services centres, RPA bots, AI agents, and now copilots — that exists because Box A’s systems cannot talk to each other. The work that happens in Box B is software-driven labour: the manual or semi-automated operation of software user interfaces to bridge application integration gaps — to make App A communicate with App B when the two cannot exchange data directly. And here is a foundational principle that the industry has consistently overlooked: integration predicates automation. If two applications are not integrated — if data cannot flow between them as part of a business process — then no automated workflow involving both applications is possible. When that integration does not exist, humans fill the gap. They become the integration method: reading from one screen, typing into another. Swivel-chair integration. The human is not “automating” anything — they are the integration. Every generation of Box B technology, from BPOs to RPA to copilots, has been a different answer to the same question: who or what serves as the integration fabric between applications that cannot talk to each other?
The seduction of copilots is that they use frontier AI — OpenAI, Claude, Gemini — and this makes them feel like a paradigm shift. The technology is a paradigm shift, but the deployment is not. A copilot that helps a claims adjuster work a legacy claims management system faster is architecturally identical to an RPA bot that worked the same system in 2015. The capability has leapt forward, but the layer has not moved.
We have seen this exact ambition before. In 2018, UiPath’s Daniel Dines declared his vision for the RPA era: “a robot for every person” — consciously channelling Bill Gates’s “a computer on every desk.” It was a bold and commercially brilliant positioning. UiPath’s valuation soared from $1 billion to $3 billion in a single year; Dines landed on the cover of Forbes as “Boss of the Bots.” The vision was that every employee in every enterprise would have a software robot assisting them with their daily tasks — handling the repetitive work so the human could focus on the creative and strategic.
As a competitor who had built the previous generation of the same underlying technology — and had built it as integration infrastructure, not as a human-replacement tool — I remember the discomfort of watching this unfold. Not because the technology was wrong, but because the architectural placement guaranteed the outcome: the robot sat in Box B, operating application UIs as a substitute for a human, inheriting every fragility of the workaround layer. When the application changed its interface, the bot broke. When the process had an exception the rules did not cover, the bot stopped. When organisations tried to scale from a handful of pilot bots to the thousands that “a robot for every person” implied, the economics collapsed under maintenance burden and brittle integrations. Deloitte found that only three percent of organisations successfully scaled their RPA programmes. The vision was right in ambition but wrong in layer — and the layer made it undeliverable.
Today, the same vision is being repackaged. “A copilot for every employee” has replaced “a robot for every person.” The technology is vastly more capable, but the architectural layer has not changed.
The Agentic Moment
Agentic AI represents a step beyond copilots — and a more interesting architectural question. Where copilots assist humans, agentic systems replace them in executing multi-step workflows. An AI agent can log into a system, operate application interfaces, make contextual decisions, handle exceptions, coordinate across multiple applications, and complete end-to-end processes without a human touching a screen. The latest agentic frameworks can orchestrate dozens of tool calls, maintain state across complex workflows, and reason about business rules in ways that would have been science fiction five years ago. In a meaningful sense, this is real automation.
But the architectural question remains: where is the agent deployed? If the agent sits in Box B — operating applications through their user interfaces as a substitute for a human operator — it is a more capable workaround. It bridges the same gaps the human bridged. It moves data between the same systems that could not exchange data directly. It is faster, cheaper, more consistent, and more scalable than the human. But it does not address the reason the human was there in the first place. Same layer.
If the same agent sits in Box A — operating application UIs not as a human replacement but as an integration method, as part of the integration infrastructure — it is something different in kind. A component of the integration stack that can reach applications regardless of whether they expose APIs, adapt to interface changes, and mediate transactions intelligently between systems that were never designed to interoperate.
The technology is identical in both deployments — only the layer differs.
The Productivity Trap: When Personal Brilliance Meets Operational Obsolescence
There is a contemporary example that illustrates two distinctions at once — and the second one is the more important.
The Model Context Protocol — MCP — has emerged as an open standard for connecting large language models to external tools and data sources. It operates at two levels, and both are worth understanding.
At the individual level, MCP is a genuine advance. A developer who connects their LLM to their code repository, their documentation, and their local file system can accomplish in minutes what previously took hours. A knowledge worker who wires up their AI assistant to their calendar, their email, and their project management tool experiences a step-change in personal effectiveness — with little or no learning curve for each new application. I have derived immense benefit from MCP-LLM integration in my own work, and I do not say this lightly.
The architectural picture is different. MCP as a connectivity method has legitimate potential in Box A. It is, in principle, another way to connect applications without requiring the other side to expose a custom API — another means of removing the onus to comply. In that sense, it belongs in the composite integration stack alongside APIs, UI integration, and semantic mediation. The architecture is sound.
But here is where the confusion enters — and where Dines’s dream reappears in new clothing. When an organisation equips each employee with an MCP-powered copilot — connecting their personal AI assistant to the applications on their desktop — and calls this an automation strategy, they are repeating the twenty-five-year pattern. They have made each individual worker more effective within their own silo. They have not connected the enterprise’s systems to each other. The gap between applications remains. The long tail of integration remains. Box B becomes more comfortable; Box A remains unaddressed. Copilot plus MCP for an individual is “a robot for every person” with better technology. It is personal productivity, not business operations.
And the distinction holds even when the copilot user is not a back-office worker. An underwriter using a copilot to summarise a submission package faster — that is personal productivity, and it is fine for what it is. But the underwriting process — the accurate, consistent application of rules, treaties, reinsurance guidelines, and regulatory requirements across every case — is not a personal productivity problem. It is an enterprise operations problem. You do not want treaty compliance to depend on how well each individual underwriter’s AI assistant happens to interpret the guidelines on a given Tuesday. You want those rules applied systematically, through a formal enterprise system, the same way every time — and then, if the organisation chooses, a reduced number of underwriters exercising judgment on the cases that actually need human authority. The same applies in claims adjudication, loan origination, and trade compliance: anywhere a regulated process depends on consistent rule application across cases, personal productivity tools are not the answer. AI for personal productivity and AI for enterprise operational automation are different things aimed at different layers, and conflating them is how organisations end up optimising roles that the same technology is about to restructure.
And this is where the second, more consequential distinction must be surfaced — because the world of business operations has changed beneath our feet while we were busy making individuals more productive.
What copilot-plus-MCP does is help a human navigate their applications faster. What agentic AI has demonstrated is that a surprising fraction of the tasks that human was performing — the underwriting analysis, the claims adjudication, the loan assessment, and the compliance review — no longer require a human in the loop at all. Not all of them. But enough to reshape the economics. An underwriting team of eight becomes a team of two — not because the work has disappeared, but because the agentic system handles the data assembly, cross-referencing, pattern matching, and rule application across systems, surfacing every relevant frame of reference — local context, language translations, legal ramifications, claims history — so that the remaining two underwriters focus exclusively on the ambiguous cases that actually require human judgment. The copilot makes all eight underwriters thirty percent faster. The agentic system makes six of them unnecessary.
The trap is plain: organisations investing in personal productivity for roles that are being structurally compressed by the same generation of AI. The copilot helps each underwriter read the submission faster. The agentic system reads, analyses, cross-references, and recommends — and the team shrinks from eight to two. By the time the copilot deployment is complete, the workforce it was designed to augment may have been restructured around it. Use MCP-powered copilots for the work that remains irreducibly human. But don’t confuse personal productivity with enterprise operations.
The Missing Piece of the Integration Stack
To understand why this architectural distinction matters, consider a concept called the onus to comply — the question of who bears the burden of making integration work. Every conventional integration method places this burden on the component application. APIs require the application to expose a programmable interface. Middleware requires it to adopt a standard message format. EDI requires both trading partners to implement a shared transaction schema. Enterprise service buses require every connected system to publish and consume from a canonical data model.
And that is why the long tail of integration persists. The long tail — the vast number of small, infrequent integration cases that are individually too expensive to build but collectively enormous — exists because each conventional integration requires both sides to comply. And compliance costs money: development time, testing, version management, ongoing maintenance. For the headline integrations — SAP to Salesforce, core banking to payments gateway — the economics work. For the thousands of smaller integrations — the claims system to the compliance tool, the logistics platform to the customs portal, the broker submission to the underwriter’s risk system — the cost of compliance on both ends exceeds the value of any individual integration. So the gap remains. And Box B fills it.
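The long-tail economics can be made concrete with a back-of-the-envelope sketch. All figures below are hypothetical assumptions for illustration — they are not data from this article — but they show why a fixed two-sided compliance cost kills individually small integrations while the unbuilt tail remains collectively enormous:

```python
# Illustrative sketch of the long-tail economics described above.
# Every number here is a hypothetical assumption, not sourced data.

def build_cost(both_sides_comply: bool, per_side_cost: float = 150_000) -> float:
    """Conventional integration: each complying side pays for development,
    testing, version management, and ongoing maintenance."""
    return per_side_cost * (2 if both_sides_comply else 1)

# A headline integration (e.g. ERP to CRM): high annual value, cost amortises.
headline_value = 2_000_000
assert headline_value > build_cost(both_sides_comply=True)  # it gets built

# One long-tail integration: low individual value, the same fixed cost.
tail_value_each = 40_000
assert tail_value_each < build_cost(both_sides_comply=True)  # the gap stays

# Collectively, though, the unbuilt tail dwarfs any single headline case.
tail_count = 1_000
print(f"Unrealised long-tail value: ${tail_value_each * tail_count:,}")
```

The asymmetry is the whole argument: the per-integration cost is roughly constant, so once the value of a single integration drops below the two-sided compliance cost, nothing to the right of that point on the curve ever gets built — and Box B fills it instead.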
Now consider what the technology that became RPA actually does. I was involved in its creation, so I can speak to the original intent with some precision. When it was originally conceived — as UI integration, before the “Robotic Process Automation” branding redirected it — the core capability was precisely this: connecting applications through their user interfaces without requiring the application to expose an API, adopt a standard, or even know it was being integrated. The impetus was a conversation with a telecom Group CIO who told us, in essence, that his dozens of enterprise systems each worked brilliantly on their own — but humans had to shuttle data between their screens. He did not want a new platform. He wanted the data to jump from one screen to another. The component application continues operating exactly as it always has. The integration happens through the application’s existing interface — the same interface a human would use.
Stay with me here, because this matters. UI integration is the only integration method that does not place the onus to comply primarily on the component application. No other integration approach has this property. APIs require the application to comply. Middleware requires the application to comply. EDI requires both sides to comply. UI integration requires neither side to comply. The integration layer bears the full burden. That is the point: the burden does not vanish — it centralises. The integrator must stay current as UIs, policies, and workflows evolve. But the trade is architectural consolidation: one integration layer managing change, rather than thousands of Box B operators each managing it independently at their desks.
When the industry rebranded this capability as “RPA” and deployed it in Box B — as a way to automate what human operators were doing — it misclassified a legitimate integration technology as a workaround tool.
I need to be direct about this, because I have watched this misclassification play out for twenty years. I built UI automation as an integration technology: a way to make applications communicate through their interfaces, enabling the automation of cross-application workflows as a natural consequence. Integration first; automation follows. The RPA framing inverted this — positioned the technology as automation first, obscured its integration foundation. The same capability that could have been a first-class citizen of the integration stack was instead positioned as a human-replacement tool, sitting in the gap between systems rather than bridging the gap architecturally.
The technology was right. The industry put it in the wrong layer.
Now pair that capability with what has happened in AI over the past three years. The agentic systems that can handle complex multi-step workflows. The LLMs that can understand transaction semantics, not just screen patterns. The adaptive AI that can handle interface changes — not blindly, but with increasing intelligence: understanding the semantics of what changed, increasingly capable of distinguishing a cosmetic layout adjustment from a policy-level change that reflects new regulatory requirements or altered business rules. When the change is cosmetic, the agent adapts. When the change is semantic — when a field has been renamed because the underlying concept changed, when a new required step has been added to prevent misuse, when a workflow has been altered for compliance reasons — the agent recognises the shift and escalates, either to higher-order agents with broader context or to humans with the authority to resolve it. This is not blind resilience; it is intelligent, tiered adaptation. When you promote the UI integration capability from Box B to Box A and infuse it with this kind of semantic understanding, you get something the integration world has never had: an integration method that can reach any application with a stable, permissioned user interface, adapt intelligently to interface evolution, understand what transactions mean, and orchestrate multi-step processes across systems that were never designed to interoperate — without requiring the other side to change.
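The tiered adaptation described above can be sketched as a simple decision structure. The change categories, the `UiChange` fields, and the `classify_change()` heuristic below are illustrative assumptions standing in for the LLM's semantic judgement — not a product API:

```python
# Sketch of tiered adaptation: classify an interface change, then either
# adapt locally (cosmetic) or escalate (semantic). All names are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto

class ChangeKind(Enum):
    COSMETIC = auto()   # layout or styling shift; same fields, same meaning
    SEMANTIC = auto()   # renamed concept, new required step, altered rules

@dataclass
class UiChange:
    field_renamed: bool
    new_required_step: bool
    layout_shifted: bool

def classify_change(change: UiChange) -> ChangeKind:
    """Crude stand-in for semantic understanding: anything that alters
    meaning or workflow is semantic; pure layout movement is cosmetic."""
    if change.field_renamed or change.new_required_step:
        return ChangeKind.SEMANTIC
    return ChangeKind.COSMETIC

def handle_change(change: UiChange) -> str:
    if classify_change(change) is ChangeKind.COSMETIC:
        return "adapt"      # re-locate the controls and continue
    return "escalate"       # hand off to a higher-order agent or a human

assert handle_change(UiChange(False, False, True)) == "adapt"
assert handle_change(UiChange(False, True, False)) == "escalate"
```

In a real deployment the classification would itself be model-driven and probabilistic; the point of the sketch is the tiering — local adaptation by default, escalation only when meaning has changed.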
This is not RPA with better AI bolted on. That has been tried — cognitive RPA, intelligent automation, hyperautomation — and each attempt kept the technology in Box B, adding capabilities to the workaround rather than promoting the capability to the integration layer. How agent hierarchies handle escalation across organisational boundaries — who has authority, how context is preserved — is a discussion for another article. But the architectural move itself is clear: take the core capability out of Box B entirely and deploy it as integration infrastructure in Box A.
What Stays Human — and What We Got Wrong About Knowledge Work
The boundary between “real human judgment” and “work that only looked like it needed a human” is shifting — fast. And the shift reveals something important about the architecture.
Consider underwriting analysis, claims adjudication, loan application assessment. For decades, these were classified as quintessential knowledge work — human experts reviewing dozens of data points across multiple systems, applying experience and judgment to determine risk, validate claims, or approve credit. The work was complex, required domain expertise, and involved interpreting ambiguous information. But agentic AI has demonstrated, convincingly, that a surprising fraction of the tasks within these roles was not judgment-dependent at all. It was data-intensive, multi-source, and structurally complex — but ultimately it was pattern recognition and policy application operating across systems that could not share data directly. When an AI agent can ingest a forty-page submission package, cross-reference it against policy terms in one system, claims history in another, regulatory requirements in a third, and surface a recommendation with every relevant frame of reference laid out — local context, language nuances, legal ramifications, historical patterns — what has been revealed is not that AI has replicated human judgment but that a large portion of what we bundled under “judgment” was data assembly distributed across disconnected systems.
I have been circling this point — let me land it. A surprising fraction of what the enterprise classified as “knowledge work” was actually Box B work at a higher altitude. The human was not always exercising irreplaceable judgment — in many of their tasks, they were serving as the only available integration fabric, the one entity capable of pulling data from multiple systems, synthesising it, and producing a coherent output. The agentic AI, by performing these tasks successfully, has not replaced human judgment. It has revealed which parts of the work were integration masquerading as judgment — and which parts remain human. The underwriting team does not disappear; it compresses. Eight become two. The two who remain do better work — because the system has assembled every relevant input, and they can focus entirely on the ambiguous cases that require genuine expertise, contextual intelligence, and accountability.
A regulatory-mandated approval still requires a human sign-off. A compliance officer interpreting ambiguous regulation still applies contextual intelligence that requires years of experience. The counterparty relationship that requires a phone call, the exception that falls outside every category, and the ethical dimension that no model can adjudicate — these remain human. But the domain of the irreducibly human has narrowed considerably. And the work that has been removed from it was, architecturally, integration work all along.
The distinction remains precise: if the only reason a person was at a screen was to synthesise data across systems that could not share it directly — whether copying fields between windows or performing “expert analysis” that was really cross-system data assembly — that was an integration problem wearing a knowledge-worker costume. If the person is there to make a judgment not derivable from available data, that is a business process. It stays.
Even in regulated or high-risk settings, the principle holds. The workflow should still be automated end-to-end — the only question is whether the system can commit autonomously or must pause at a defined human approval gate. Humans should be gates, not glue. The gate is a deliberate architectural choice: a point in the process where human authority is required. The glue — the manual shuttling of data between systems, the re-keying, the cross-referencing — is an integration failure that has been institutionalised as a role.
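The gates-not-glue principle can be sketched in a few lines. The step names, the threshold policy, and the gate predicate below are hypothetical — the point is that the workflow is automated end-to-end, with a human appearing only at a declared approval gate:

```python
# Minimal sketch of "gates, not glue": glue work (data assembly across
# systems) is fully automated; a human sign-off occurs only at a defined
# gate. Policy and step names are illustrative assumptions.

from typing import Callable

def run_workflow(amount: float,
                 needs_human_gate: Callable[[float], bool],
                 human_approves: Callable[[float], bool]) -> str:
    # The glue -- assembling data across systems -- is fully automated.
    assembled = {"amount": amount, "history": "fetched", "rules": "applied"}

    # The gate is a deliberate architectural choice, not a workaround.
    if needs_human_gate(assembled["amount"]):
        return "committed" if human_approves(amount) else "rejected"
    return "committed"  # autonomous commit below the gate threshold

# Example policy: regulatory sign-off required above a monetary threshold.
gate = lambda amt: amt > 100_000
assert run_workflow(5_000, gate, lambda a: True) == "committed"
assert run_workflow(250_000, gate, lambda a: False) == "rejected"
```

The design choice worth noticing: the human is invoked by the system at a defined point, rather than the system being invoked by the human at every point.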
AI in Box A eliminates the glue — that first category — and the agentic revolution, by successfully automating what was previously considered irreducibly human, has dramatically expanded how much of the work falls into it. The human workforce does not disappear — but it is redirected, more sharply than most organisations have yet recognised, from data assembly to genuine judgment, relationship management, accountability, and innovation. The irony of deploying AI in Box B to make individual knowledge workers faster is that organisations are keeping their most expensive people occupied with work that AI has already proven can be done at the system level — and leaving less capacity for the human decisions that actually determine business outcomes.
The deeper realisation is this: the knowledge worker as a cog in the enterprise machine — the human who sits between systems, synthesising data, applying rules, and producing outputs — is a role that was never meant to exist permanently. It emerged because Box B needed a fabric to hold it together, and for several decades, humans were the only fabric available. They became structural components of a workaround architecture that grew so large and so familiar that it began to look like the business itself.
Now the tools and technologies have arrived that can replace that fabric entirely — shifting business processes from Box B’s human-mediated workarounds to fully automated flows with human oversight only where genuine judgment is required. The question is whether organisations will recognise this structural shift, or continue investing in making the cogs spin faster.
The Composite Integration Stack
The constructive vision is not a wholesale replacement of existing integration infrastructure. It is a completion of it.
Box A today contains conventional integration methods — APIs, middleware, EDI — that work well for headline cases: the high-value system pairs, the stable bilateral exchanges where both sides have invested in programmable interfaces. What Box A has never contained is an integration method for the long tail — the thousands of applications that do not expose APIs, the legacy systems, the third-party portals, the cross-organisational interfaces where you cannot modify the other side. This is the gap that Box B fills, generation after generation, with humans, then bots, then copilots.
The division of labour is determined by the use case, not ideology. Any integration that demands ACID transactions, high-volume throughput, or deterministic guarantees belongs on the left side of the long-tail curve — and the economics of conventional integration already justify building it there. If a use case needed those properties, it would not be in the long tail in the first place. The long tail is the long tail precisely because its use cases lack those characteristics — because the current “transactional guarantee” is a human in a shared services centre checking their own re-keying. Rather than leaving those cases in Box B or dressing them up with copilots, you move them back to Box A as intelligent UI integration.
The composite integration stack completes Box A by adding the missing layers: conventional integration for the headline cases and any use case requiring strict transactional guarantees (this stays); intelligent UI integration — the agentic capability promoted from Box B, powered by AI — for the long tail, reaching applications through their existing user interfaces and adapting intelligently when those interfaces evolve; and semantic mediation — LLM-powered understanding of what each system means — operating across all integration methods, translating between applications in real time rather than requiring every system to adopt a canonical format. The operating principle across the stack: use AI for intent and interpretation — understanding what a transaction means, what each system expects, how to navigate ambiguity — and execute via typed, policy-checked actions with full auditability.
These are complementary layers of a single integration architecture. APIs where they exist. Intelligent UI integration where they don’t. Semantic AI across all of it. The “missing app in Box A” — the promoted agentic capability, infused with semantic intelligence — fills the gap that conventional integration has left open for thirty years. It makes the long tail of integration economically solvable for the first time (for a deeper exploration of why the long tail persists and the $400 billion workaround economy it sustains, see “The $400 Billion Workaround”).
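The operating principle of the composite stack — AI for intent and interpretation, execution via typed, policy-checked actions with full auditability — can be sketched as follows. The class and method names, the routing rule, and the policy check are all illustrative assumptions, not a real product interface:

```python
# Sketch of the composite stack's execution discipline: the route is chosen
# per system (API where it exists, UI integration where it doesn't), every
# action is typed and policy-checked, and everything is audited.
# All names and rules here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Action:
    system: str
    operation: str   # a typed action, not free-form UI clicking
    payload: dict

@dataclass
class IntegrationStack:
    api_capable: set                      # systems exposing programmable APIs
    audit_log: list = field(default_factory=list)

    def route(self, action: Action) -> str:
        # APIs where they exist; intelligent UI integration where they don't.
        return "api" if action.system in self.api_capable else "ui"

    def execute(self, action: Action) -> str:
        if not self._policy_allows(action):
            self.audit_log.append(("denied", action.system, action.operation))
            return "denied"
        method = self.route(action)
        self.audit_log.append(("executed", action.system, method))
        return method

    def _policy_allows(self, action: Action) -> bool:
        # Illustrative policy check; real rules would be far richer.
        return action.operation != "delete_all"

stack = IntegrationStack(api_capable={"salesforce"})
assert stack.execute(Action("salesforce", "update", {})) == "api"
assert stack.execute(Action("legacy_claims", "update", {})) == "ui"
assert stack.execute(Action("legacy_claims", "delete_all", {})) == "denied"
assert len(stack.audit_log) == 3
```

The division of labour is the point: the AI decides what a transaction means and which route can carry it; the typed action layer decides whether and how it is allowed to execute, and records that it did.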
The 25-Year Pattern — and Why This Moment Is Different
Return to the timeline from the opening — shared services centres, RPA bots, cognitive automation, hyperautomation, copilots. Five generations of technology, each more capable than the last, each deployed in the same architectural layer. I have watched this cycle repeat for my entire career. The pattern persists not because the technology is inadequate but because the architectural assumption goes unexamined: that the right place to deploy automation is in the gap between systems, making the workaround faster, rather than in the integration layer, making the workaround unnecessary.
Every generation was presented as the one that would change everything, and every one delivered real productivity gains within Box B — while leaving Box A’s integration gaps untouched, slightly more obscured by the machinery that had grown up around them. The digital transformation programmes that were supposed to address these gaps have their own structural limitations — they tend toward centripetal consolidation at headquarters while the real integration failures live at the periphery, in the B2B exchanges, the local operations, the cross-organisational workflows that no central programme can reach (see “Rethinking Digital Transformation: The Centripetal Fallacy”).
But the current generation — copilots, agentic AI, generative AI — is different in one crucial respect. For the first time, the technology is powerful enough to operate at the integration layer itself. Previous generations could mimic human actions but not understand them. They could click buttons but not interpret transactions. The AI capabilities available today — semantic understanding, multi-step reasoning, adaptive operation, and contextual decision-making — are precisely what intelligent integration requires.
The question is no longer whether the technology can do it. The question is whether organisations will deploy it where it matters — in the integration layer, where it addresses the root cause — or continue the twenty-five-year pattern of making the workaround more impressive while the underlying problem compounds.
The copilot can read the document. The agent can work the screen. But the data was digital before it became a document, and the screens exist because the systems behind them cannot talk to each other. The integration layer — the plumbing that would make the workaround unnecessary — is still waiting.
This article is the constructive companion to “Rethinking AI for Automation: Where the Light Is,” which diagnoses the misdirection of AI toward the workaround layer. For the economic and social consequences of the architectural shift described here — what happens to markets, workforces, and competitive structures when Box A is fixed — see “Rethinking AI for Automation: The Real Redistribution.” For what the composite integration stack looks like in practice — a transaction flowing through an intelligent integration layer — see “Rethinking the Transaction.”
Madhav Sivadas is an enterprise software integration architect with nearly thirty years in process integration, UI automation, and enterprise workflow. He founded Inventys (acquired 2012), holds multiple US patents in software integration, and is the founder and CEO of Telligro, building AI-driven intelligent transaction networks for insurance, logistics, and financial intermediaries.
madhavsivadas.com
