
Counter to Palki Sharma's "The AI Bubble: Why 95% of GenAI Projects Are Failing | Vantage with Palki Sharma | N18G"

Palki's latest YouTube clip: https://www.youtube.com/watch?v=Zsh6VgcYCdI

I asked Perplexity to counter this:


Here are 10 counterpoints to the arguments presented in the video “The AI Bubble: Why 95% of GenAI Projects Are Failing” by Palki Sharma on Firstpost. Each point challenges the narrative that generative AI (GenAI) is mostly failing or that we are heading toward an “AI bubble” collapse:

  1. Widespread Productivity Gains
    While 95% of pilots may not directly deliver measurable revenue in the short term, GenAI is already driving widespread productivity gains—especially in code generation, content creation, document search, and customer support. Companies using AI are streamlining workflows, reducing error rates, and accelerating turnaround times in daily operations.

  2. Adoption Curve Mirrors Past Technologies
    Historically, every transformative technology (e.g., electricity, the internet) went through a phase of early, unfocused experimentation with a high failure rate. The high percentage of short-term "failures" is not unique to AI and signals not a bubble but the normal adoption curve toward product-market fit.

  3. Enterprise Integration Is a Long Game
    Large organizations are risk-averse, and integrating AI into legacy systems takes months or years, not weeks. Many “pilots” are learning exercises, not failures—companies are building institutional capability and understanding before scaling up.

  4. Definition of “Meaningful Revenue” Is Narrow
    The “meaningful revenue” criterion used in the MIT study and in the video is overly narrow. AI projects often first reduce costs, improve retention, and optimize processes. These benefits are not always directly reflected as revenue, yet have tangible business impact.

  5. Specialized Tools vs. “Build Your Own” Isn’t a Fair Binary
    While specialized AI tools currently have a higher success rate, many organizations want to build custom solutions for unique needs and data. Early failures are the price of ownership, not waste. Over time, custom AI models may yield outsized competitive advantages once properly deployed and maintained.

  6. Sales and Marketing Use Cases Are Foundational
    Use cases derided as "sales gimmicks"—AI writing emails or generating pitch decks—often serve as foundational experiments. They build AI literacy, employee engagement, and innovation pathways that later evolve into deeper enterprise use cases.

  7. Long-Term ROI Comes from Domain Learning
    The real ROI for AI often arrives after a period of domain-specific learning—classifying internal documents, automating compliance, or optimizing supply chains. Pilot failures today may lead to major business transformation in a few years.

  8. AI Hype Draws Investment and Accelerates Development
    While “hype cycles” can overinflate expectations, hype also drives necessary capital and talent toward the field. This accelerates research breakthroughs, infrastructure, and open-source tools—ultimately benefiting the market.

  9. OpenAI and Big Tech: Strategic, Not Just for Profit
    The suggestion that OpenAI's lack of profitability proves a bubble overlooks the fact that big tech companies often operate at a loss or break even in their early years to achieve dominant market share (see Amazon's first decade). Current deficit spending funds growth, standards, and user adoption for AI.

  10. Early Failures Do Not Predict the Fate of Industry Giants
    Most early internet companies failed, but foundational infrastructure like TCP/IP, databases, and browsers persisted. Likewise, even if many GenAI companies fail, the underlying models, techniques, and infrastructure will persist, fueling new entrants and use cases beyond today’s visibility.

These points suggest that while the AI “bubble” narrative may be relevant for investors seeking quick returns, the broader story is one of gradual adoption, learning, and long-term transformation rather than simple hype and bust.
