Summary: AI is not transforming every part of business equally. It thrives in structured, verifiable domains like coding, but struggles in ambiguous, customer-facing roles where errors are costly and public. This article examines how the AI revolution is unfolding at very different speeds across industries – and what that means for businesses and investors.
Two stories from the past few weeks capture something essential about where we are with AI. Consider them together.
The first concerns Salesforce, the enterprise software giant that has been among the most aggressive adopters of AI in customer-facing operations. About a year ago, CEO Marc Benioff announced that AI agent deployment had enabled the company to reduce its support staff from 9,000 to approximately 5,000. The future had arrived. Then reality intervened. Reports from late 2025 and early 2026 indicate that the company is now reducing its reliance on AI after a comprehensive failure. The AI agents displayed what internal reports called "high variance in responses", corporate-speak for confidently giving wrong answers. They suffered from "instruction dropping": in sequences longer than eight steps, the models would omit some steps. They exhibited "drift," losing focus on their primary tasks when users asked unexpected questions. Major customers complained that AI-driven support took longer to resolve issues than the old search function it replaced. Salesforce is now pivoting to what it calls "deterministic automation", which is essentially a return to rigid, rule-based scripting. The company that fired thousands of people to embrace AI is now admitting, in corporate language that barely disguises the embarrassment, that it was "more confident" than the technology warranted.
Suggested read: The million bug reports that AI can’t match
The second story is harder to pin to a single headline because it's a zeitgeist shift. Over the past couple of months, the conversation around AI and software development has transformed completely. People who were sceptical six months ago – senior developers, technical leads, people who actually write code for a living – are now saying, with varying degrees of alarm or excitement, that the age of human beings writing code is coming to an end. Not in some distant future, but imminently. The tools have crossed a threshold. What was a helpful assistant has become something closer to an autonomous colleague. Entire features are being shipped by AI with minimal human intervention. The productivity gains are no longer incremental; they're structural.
Suggested read: Nothing new…
How can both of these things be true simultaneously? How can AI be failing so comprehensively in customer service – a domain that appears, on paper, to be relatively straightforward – while revolutionising software development, which seems far more complex?
The answer, I believe, is that we've been thinking about AI all wrong. We treat it as a single phenomenon, a unified force that will sweep through the economy at roughly the same pace. However, AI in business is not a single story. It's many parallel stories, moving at wildly different speeds. Some domains have achieved genuine escape velocity. Others remain stuck in what the industry calls "pilot purgatory." And the distinction has almost nothing to do with how intelligent the AI is.
Suggested read: Let’s be boring
I should note that this is not a theoretical observation. Over the past couple of years, I've been managing Value Research's AI adoption across different parts of the business – software development, content (both written and graphical), internal tools and, most of all, our website and public web-based tools. The differential velocity of AI across these domains is something I've discovered first-hand, fighting in the trenches and getting wounded, so to speak. The framework I'm about to describe emerges from that experience, not from reading research papers.
I've written about AI extensively over the past couple of years, trying to navigate between breathless enthusiasm and reflexive dismissal. In late 2024, I argued that "this time it's different", that, unlike the EV transition, where investors could simply avoid automotive stocks until the dust settled, "AI's pervasive nature means there's no practical way to sit this revolution out." I stand by that. A year later, I noted that "the fact that a revolution is real doesn't mean that every business claiming to be part of it will succeed, and it certainly doesn't mean that every product and service needs to be AI’ed immediately." I stand by that, too. More recently, I observed that "the gap between what AI demos well in controlled environments and what it actually delivers when confronting the messy real world remains enormous."
Suggested read: Random patterns and equity investing
But I now think there's a more precise way to understand this gap. It's not random. It's structural. AI thrives in some worlds and struggles in others, and the determining factors are not what most people assume.
Consider what makes coding such fertile ground for AI. Code has formal structure – syntax, testability and deterministic outcomes. Outputs are machine-verifiable: the code runs and passes tests, or it doesn't. The feedback loop is immediate. When an AI makes a mistake, tests fail, a developer (or increasingly, the AI itself) notices, fixes it and moves on. Errors are private and reversible. A bug is costly but correctable. And crucially, the work decomposes cleanly into discrete units that can be evaluated independently.
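The feedback loop described above can be made concrete with a small sketch. The function and its checks below are entirely hypothetical, invented for illustration; the point is that the outcome is machine-verifiable: the assertions either pass or they don't, and a failure is caught privately, before any user ever sees it.

```python
# A minimal sketch of the verifiable feedback loop described above.
# parse_amount is a hypothetical helper of the kind an AI might generate;
# the assertions are the "machine-verifiable outcome".

def parse_amount(text: str) -> float:
    """Convert a string like 'Rs 1,600' to a number."""
    cleaned = text.replace("Rs", "").replace(",", "").strip()
    return float(cleaned)

def check_parse_amount():
    # Immediate, private, reversible verification: a failure here is a
    # cheap error, fixed and re-run in seconds.
    assert parse_amount("Rs 1,600") == 1600.0
    assert parse_amount("2,500") == 2500.0

check_parse_amount()
print("all checks passed")
```

Contrast this with a customer-service reply, where there is no equivalent of a failing assertion: the only "test" is a human being who may already be angry by the time the error is noticed.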
There's an important caveat here. Writing code is perhaps 10 to 20 per cent of what creating a successful software product actually takes. The rest is understanding what needs to be built, why it needs to be built, how it fits into existing systems, what happens when it breaks and institutional knowledge about edge cases and customer behaviour. As I noted in a recent column, perhaps 10 per cent of the total business value of even a tech-centric business comes from the bare code itself. AI has transformed that 10-20 per cent dramatically. Whether it can touch the remaining 80-90 per cent is a very different question – and that's where the Salesforce debacle becomes instructive.
Suggested read: This time really is different
Now consider customer service, where Salesforce and many others have come to grief. On paper, this should be easier. The queries are repetitive, there are massive historical datasets, and intent classification seems straightforward. But in practice, the domain is a minefield. Customers don't speak in data schemas, and emotion, sarcasm and cultural context matter enormously. One wrong answer can escalate to social media outrage, regulatory complaints, or legal action because the failures are public. Who owns a hallucination? When the AI says something incorrect, is that the company speaking? And the edge cases that constitute perhaps 5 per cent of queries consume 50 per cent of the operational pain – precisely the situations where humans excel and scripts fail.
The difference isn't about intelligence but about what I'd call error economics. AI thrives where mistakes are cheap, private and correctable. It struggles where mistakes are expensive, public and permanent. Verifiability, reversibility, iterability – these are the actual determinants of whether AI can move from impressive demo to actual deployment.
This framework suggests that we can map the business world into fast, medium, and slow worlds for AI adoption. The fast worlds include coding, data transformation, internal analytics and infrastructure automation. The medium worlds include marketing copy, sales enablement and internal knowledge search. The slow worlds include customer-facing communication, HR decisions and compliance-heavy workflows. These worlds coexist within the same company, creating uneven productivity gains, organisational tension and what may be the most dangerous thing of all: false expectations at the top.
Executives misread this because businesses classify AI by function – "customer service AI," "coding AI" – instead of by the structural characteristics that actually determine success. Coding worked out well, not because it was prioritised or better funded, but because it was structurally compatible with systems that will sometimes be wrong.
Suggested read: Investing, fast and slow
We got a perfect illustration of this executive disconnect just today. During Bajaj Finance's Q3 conference call, CEO Rajeev Jain proudly announced: "AI listened to 2 crore calls, converted voice to text, and gave us data. Text-to-data conversion happened for 5.2 lakh customers. As a result, we generated 100,000 new offers for which we did not have information earlier." He added that loan disbursements through AI-powered call centres amounted to approximately Rs 1,600 crore, roughly 10 per cent of the quarter's total disbursements. "This capability did not exist in Q1 and Q2. It just got deployed. We'll be able to listen to 100 million calls next year," Jain said.
The response on X was, predictably, hilarious. As the entire country, except apparently Mr Jain, knows, Bajaj Finance's incessant and mindless spam calls are the butt of countless jokes and memes. Here was a CEO proudly announcing that AI would help them generate even more offers from even more calls – using sophisticated technology to optimise something that customers actively despise. It's a masterclass in missing the point: AI being deployed to scale up the very activity that damages the brand. The machine is learning perfectly; it's the human learning that's absent.
For Indian investors, this framework is immediately relevant when evaluating our IT services giants – TCS, Infosys, Wipro, HCL Tech and Cognizant. These companies present a paradox: they perform work in the fast world (coding, software development, data transformation) but sell it as a service. They're structurally caught between two worlds.
The business model problem is this: these companies largely bill on a time-and-materials or headcount basis. If AI makes a developer twice as productive, the client faces a choice: pay for twice the output, or halve the team. The incentives are misaligned. The IT services company sells effort; the client wants outcomes. AI productivity gains accrue to the buyer of services, not necessarily the seller. This is fundamentally different from a product company, where internal productivity gains flow directly to margins.
Suggested read: The pigeon in every investor
Moreover, much of what these companies do isn't pure greenfield coding – it's maintenance, legacy system management, client-specific customisation, handling change requests and understanding what the client actually wants versus what they stated in the requirements document. This service wrapper around technical work involves exactly the ambiguity and human judgment that characterise the slow worlds. Understanding a client's business context, navigating internal politics, managing expectations when something goes wrong: these are deeply human skills. The coding might be automatable; the relationship management less so.
The companies aren't identically positioned. TCS, with its massive scale and diversification, is heavily tied to the traditional headcount model. Infosys has tried harder to position itself more around platforms and products, potentially better placed to sell AI implementation services rather than merely provide AI-augmented labour. Cognizant, very US-focused and heavily services-oriented, is arguably most exposed to the uncomfortable question: if AI makes developers productive, why do I need as many Cognizant developers?
The bull case is that these companies possess extensive domain knowledge, decades of client relationships and an understanding of legacy systems that no AI can match. As I wrote recently, quoting David Sacks: "Think about how many bug reports on Salesforce's code base over the last 25 years. Maybe millions of them. That system has been tested across thousands of large customers and enterprises." Indian IT services companies have precisely this kind of institutional memory about their clients' systems. They could become the implementers of AI transformation – selling consulting and integration services around AI rather than competing against it.
Suggested read: The thinking investor’s advantage
The bear case is that the fundamental arbitrage – smart Indian talent at lower cost doing work for Western clients – is squeezed from both ends. AI reduces the amount of human effort required, and the remaining work may involve higher-judgment tasks for which cost arbitrage matters less.
The investor question is: are you buying a coding company in which AI is additive and productivity gains are captured, or a services company in which AI productivity flows to clients and margin pressure increases? The answer is probably "both, uncomfortably", which is exactly why this requires careful thought rather than blanket optimism or pessimism.
What does all this mean for the sensible investor trying to navigate the AI landscape?
First, when you hear "AI" attached to a business function, ask the most important question: what happens when it's wrong? If the answer involves customers, regulators, or reputations, progress will be far slower than the executive-level PPT claims. If the answer is "it gets noticed and fixed quietly," that's a different world entirely.
Second, be deeply sceptical of companies claiming AI has transformed their customer-facing operations. Salesforce is not an outlier; it's a warning. The structural incompatibility between probabilistic AI systems and human beings who demand certainty and accountability remains unresolved. As I wrote earlier this month: "I'll believe AI is replacing customer service jobs when I have one satisfactory interaction with an AI support agent. Just one."
Third, recognise that the market currently prices AI as a single phenomenon when it's actually many parallel experiments, some succeeding and some stalling. The winners won't be companies that "use AI" but companies that understand where AI can structurally work and, just as importantly, where it cannot.
Fourth, for Indian IT services specifically, the next few years will reveal whether these companies can transition from selling effort to selling outcomes. Those that manage this shift may thrive in an AI-enabled world. Those that remain wedded to the headcount model may find themselves on the wrong side of their clients' productivity gains.
The story of AI in business is not one of universal acceleration. It is one of selective escape velocities. Coding has left the atmosphere and entered orbit. Customer service is still fighting gravity. Most other functions lie somewhere in between, mistakenly assumed to be closer to the rocket than they really are. For investors, the crucial skill is not predicting whether AI will be transformative (it will be) but understanding which worlds will transform quickly and which will remain stubbornly earthbound. The many worlds of AI are not converging. They're diverging. And that divergence will determine which investments succeed and which disappoint.
Also read: Flawed mental models