Counter to Palki Sharma's "The AI Bubble: Why 95% of GenAI Projects Are Failing | Vantage with Palki Sharma | N18G"
Palki's latest YouTube clip: https://www.youtube.com/watch?v=Zsh6VgcYCdI
I asked Perplexity to counter this:
Here are 10 counterpoints to the arguments presented in the video “The AI Bubble: Why 95% of GenAI Projects Are Failing” by Palki Sharma on Firstpost. Each point challenges the narrative that generative AI (GenAI) is mostly failing or that we are heading toward an “AI bubble” collapse:
1. Widespread Productivity Gains
While 95% of pilots may not deliver measurable revenue in the short term, GenAI is already driving broad productivity gains, especially in code generation, content creation, document search, and customer support. Companies using AI are streamlining workflows, reducing error rates, and accelerating turnaround times in daily operations.

2. The Adoption Curve Mirrors Past Technologies
Historically, every transformative technology (electricity, the internet) saw early, unfocused experimentation and a high failure rate. A high percentage of short-term “failures” is not unique to AI and doesn’t signal a bubble; it is the normal adoption curve toward product-market fit.

3. Enterprise Integration Is a Long Game
Large organizations are risk-averse, and integrating AI into legacy systems takes months or years, not weeks. Many “pilots” are learning exercises, not failures: companies are building institutional capability and understanding before scaling up.

4. The Definition of “Meaningful Revenue” Is Narrow
The “meaningful revenue” criterion used in the MIT study and in the video is overly narrow. AI projects often first reduce costs, improve retention, and optimize processes. These benefits are not always reflected directly as revenue, yet they have tangible business impact.

5. Specialized Tools vs. “Build Your Own” Isn’t a Fair Binary
While specialized AI tools currently have a higher success rate, many organizations want custom solutions for their unique needs and data. Early failures are the price of ownership, not waste. Over time, custom AI models may yield outsized competitive advantages once properly deployed and maintained.

6. Sales and Marketing Use Cases Are Foundational
Derided “sales gimmicks” such as AI-written emails or generated pitch decks often serve as foundational experiments. They build AI literacy, employee engagement, and innovation pathways that later evolve into deeper enterprise use cases.

7. Long-Term ROI Comes from Domain Learning
The real ROI from AI often arrives after a period of domain-specific learning: classifying internal documents, automating compliance, or optimizing supply chains. Today’s pilot failures may lead to major business transformation within a few years.

8. AI Hype Draws Investment and Accelerates Development
While hype cycles can overinflate expectations, hype also channels necessary capital and talent into the field. This accelerates research breakthroughs, infrastructure, and open-source tooling, ultimately benefiting the market.

9. OpenAI and Big Tech: Strategic, Not Just for Profit
The suggestion that OpenAI’s lack of profitability proves a bubble overlooks the fact that big tech companies often operate at a loss or break even for years in order to win dominant market share (see Amazon’s early years). Current deficit spending funds growth, standards, and user adoption for AI.

10. Early Failures Do Not Predict the Fate of the Industry
Most early internet companies failed, but foundational infrastructure like TCP/IP, databases, and browsers persisted. Likewise, even if many GenAI companies fail, the underlying models, techniques, and infrastructure will persist, fueling new entrants and use cases beyond today’s visibility.
These points suggest that while the AI “bubble” narrative may be relevant for investors seeking quick returns, the broader story is one of gradual adoption, learning, and long-term transformation rather than simple hype and bust.