Crossing the AI Divide

Brian Ritchie, kama.ai, Felicia Anthonio, #KeepItOn coalition, and Dr. Moses Isooba, Executive Director of UNNGOF for Forus Workshop on AI Activism

1. The AI Investment Paradox

The State of AI in Business 2025 report exposes a critical truth about today’s artificial intelligence landscape. Despite an estimated $30–40 billion invested in Generative AI, a staggering 95% of companies report zero measurable return. You read that correctly: ZERO measurable ROI for 95% of enterprise GenAI initiatives. 

The report calls this the GenAI Divide. Most organizations start with big hopes, move into pilot projects, and then quietly stall before reaching production. The problem is not that AI cannot generate content. Rather, the challenge is that it cannot adapt to real business contexts. These systems are fast but brittle. They are fluent but unreliable. Often they are simply disconnected from the workflows they are meant to support. As the report notes, “Sixty percent of organizations evaluated such tools, but only five percent reached production.” 

Executives are now realizing that fluency alone does not equal value. Business outcomes depend on trust, context, and governance. AI needs to learn continuously, retain context, and integrate with actual business processes. These are not minor gaps! They are structural issues in how AI is being deployed across industries. At kama.ai, these challenges are not new to us. They define exactly what we set out to do, and what we solve daily.

 

2. From Fluency to Trust

At kama.ai, we focus on building AI systems that earn trust. Our entire platform was designed to address the specific weaknesses that the report highlights. The study found that “Winning startups build systems that learn from feedback, retain context, and customize deeply to workflows.” This is precisely what the platform does. It uses a continuous feedback loop that lets organizations train and refine their AI based on real-world experience. These aren’t static datasets. Every interaction improves accuracy, relevance, and trust. The process is also built on human-in-the-loop review, ensuring hallucinations don’t slip in and accuracy stays on point. 
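To make that feedback loop concrete, here is a minimal Python sketch of how a human-in-the-loop review queue might gate what enters a knowledge base. The class and function names are illustrative assumptions, not kama.ai’s actual platform API; the point is simply that nothing becomes servable knowledge until a reviewer has approved it.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CandidateAnswer:
    """A drafted answer waiting for human review before it can be served."""
    question: str
    answer: str
    approved: Optional[bool] = None
    reviewer_note: str = ""

@dataclass
class KnowledgeBase:
    """Stores only answers that a human reviewer has approved."""
    approved: dict = field(default_factory=dict)

    def promote(self, candidate: CandidateAnswer) -> None:
        # Only human-approved content ever becomes servable knowledge.
        if candidate.approved:
            self.approved[candidate.question] = candidate.answer

def review_cycle(kb: KnowledgeBase, queue: list) -> None:
    """Simulate a reviewer pass: approve answers that carry a verification note."""
    for candidate in queue:
        candidate.approved = bool(candidate.reviewer_note)
        kb.promote(candidate)

if __name__ == "__main__":
    kb = KnowledgeBase()
    queue = [
        CandidateAnswer("What is our refund window?", "30 days from purchase.",
                        reviewer_note="Matches policy doc v4."),
        CandidateAnswer("Do we offer lifetime warranties?", "Yes, on everything."),  # unverified: rejected
    ]
    review_cycle(kb, queue)
    print(kb.approved)  # only the verified answer was retained
```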

Brian Ritchie, CEO of kama.ai, captured this commitment perfectly in our Responsible AI eBook, saying, “Corporations need to approach Virtual Agents from a socially responsible, sustainable, and ethical perspective.” That principle became the foundation for our entire Responsible Hybrid AI framework. The system was built around transparency, accountability, and human oversight. These are values that make AI not just powerful, but truly dependable. This is particularly true in complex enterprise environments.

 

3. Hybrid AI: The Real Fix for the GenAI Divide

Kama.ai’s Hybrid AI Agents merge deterministic Knowledge Graphs with governed Generative AI. This combination provides the guardrails that businesses need: fully verified responses where accuracy matters most, with flexibility preserved in lower-risk applications. We call this two-part approach GenAI’s Sober Second Mind®. It is a technology that keeps generative creativity under control through human review and advanced governance. The system separates factual reasoning from creative exploration and ensures that all AI-generated information is reviewed and approved before reaching customers or employees. Best of all, you can set the level of constraint you need, so a user can be informed when the generative features provide an answer versus when the 100% correct answer is provided deterministically.
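As a rough illustration of that routing idea, the Python sketch below checks a curated store of verified answers first and only falls back to a placeholder generative step when no verified answer exists, labelling every response with its source. All names here are simplified assumptions for illustration, not the actual Sober Second Mind® implementation.

```python
from typing import Tuple

# Illustrative verified store: question -> curated, pre-approved answer.
VERIFIED_ANSWERS = {
    "what is the claims deadline?": "Claims must be filed within 90 days.",
}

def generative_fallback(question: str) -> str:
    """Placeholder for a governed generative model call."""
    return f"Draft response to: {question} (pending human verification)"

def answer(question: str) -> Tuple[str, str]:
    """Return (response, source_label) so users know how the answer was produced."""
    key = question.strip().lower()
    if key in VERIFIED_ANSWERS:
        # Deterministic path: verified content, served as-is.
        return VERIFIED_ANSWERS[key], "verified"
    # Generative path: flagged so the user knows it was not served deterministically.
    return generative_fallback(question), "generative (flagged for review)"

if __name__ == "__main__":
    for q in ["What is the claims deadline?", "How do I appeal a decision?"]:
        text, source = answer(q)
        print(f"[{source}] {text}")
```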

Each answer produced by our Hybrid AI Agents is grounded in a Trusted Collection of curated enterprise data. This data is stored securely, vectorized for quick access, and fully traceable for audits. As our Hybrid AI Agents eBook states, “Hybrid AI turns AI from an unpredictable black box into a transparent, auditable, continuously improving knowledge system.” The table below compares the barriers outlined in the State of AI in Business 2025 report with kama.ai’s solutions.

 

Barrier in the Report | Kama.ai Solution | How It Works
Lack of memory and learning | Knowledge Graphs with feedback loops | Retains verified context for every interaction
Misalignment with workflows | Human-in-the-loop Hybrid AI | Customizes to each department and process
Hallucination and bias risk | GenAI’s Sober Second Mind® | All content governed and reviewed, or sourced from trusted collections. No Hallucination!
Pilot failure and low ROI | Progressive scaling model | Starts small, improves with data and human input

 

This structure ensures enterprises don’t repeat the same mistakes that have left so many stalled in pilot purgatory. Hybrid AI delivers the safety of deterministic logic with the agility of generative intelligence. This creates the most reliable model for sustainable AI adoption.
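For readers who want a feel for what “vectorized for quick access, and fully traceable for audits” can mean for the Trusted Collection described above, here is a toy Python sketch. The bag-of-words similarity stands in for a real embedding model, and the collection entries and field names are invented for illustration; they are not kama.ai’s schema.

```python
import math
import re
from collections import Counter
from datetime import datetime, timezone

def embed(text: str) -> Counter:
    """Toy bag-of-words 'vector' standing in for a real embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Illustrative trusted collection: each entry keeps its source ID for traceability.
TRUSTED_COLLECTION = [
    {"id": "HR-007", "text": "Employees accrue 1.5 vacation days per month."},
    {"id": "FIN-012", "text": "Expense reports are reimbursed within 10 business days."},
]

AUDIT_LOG = []

def retrieve(query: str) -> dict:
    """Return the closest trusted entry and record an auditable trace."""
    best = max(TRUSTED_COLLECTION, key=lambda d: cosine(embed(query), embed(d["text"])))
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "query": query,
        "source_id": best["id"],  # every answer is traceable to a curated document
    })
    return best

if __name__ == "__main__":
    hit = retrieve("How fast are expenses reimbursed?")
    print(hit["text"])
    print(AUDIT_LOG[-1])
```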

 

4. From Pilots to Performance

The State of AI in Business 2025 report found that successful organizations treat AI vendors as business partners, not just software providers. They focus on operational outcomes rather than model benchmarks. This is exactly how kama.ai collaborates with clients today. We begin by building an initial knowledge base aligned to the organization’s workflows. This starts small and scales as accuracy and confidence grow. It lets the organization and employees adapt, learn, and adopt the system fully. Our clients co-develop Trusted Collections and participate directly in reviewing, refining, and approving content.

This way, risks of random or unverified answers are completely eliminated. It ensures every response aligns with company policy, brand voice, and regulatory standards. No lost sleep worrying about what your AI is telling customers and employees based on tidbits of misinformation it found on the web. Over time, our clients move from small proof-of-concept pilots to fully operational systems that continuously improve through human feedback. As the Responsible AI eBook explains, “Human oversight is vital to control AI output to reduce bias or misinformation.” That insight reflects our design philosophy: AI should always augment, never replace, human judgment.
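That pre-publication governance step can be pictured as a simple gate that every drafted response must clear before it reaches a customer or employee. The checks below (banned phrases, required wording, reviewer sign-off) are hypothetical examples of the kinds of policy, brand, and regulatory rules an organization might encode; they are not a list from kama.ai’s product.

```python
from dataclasses import dataclass

@dataclass
class DraftResponse:
    text: str
    reviewer_signed_off: bool = False

# Hypothetical policy rules an organization might enforce before publication.
BANNED_PHRASES = ("guaranteed returns", "legal advice")
REQUIRED_DISCLAIMER = "Please consult your policy documents for full details."

def passes_governance(draft: DraftResponse) -> bool:
    """Return True only if the draft clears every configured policy check."""
    text = draft.text.lower()
    if any(phrase in text for phrase in BANNED_PHRASES):
        return False                      # violates content policy
    if REQUIRED_DISCLAIMER.lower() not in text:
        return False                      # missing mandated wording
    return draft.reviewer_signed_off      # human sign-off is the final gate

if __name__ == "__main__":
    ok = DraftResponse(
        "Your claim is usually processed in 10 days. "
        "Please consult your policy documents for full details.",
        reviewer_signed_off=True,
    )
    bad = DraftResponse("We offer guaranteed returns on all investments.")
    print(passes_governance(ok))   # True
    print(passes_governance(bad))  # False
```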

 

5. Real Results Without Risk

Responsible AI is not about limiting innovation. Rather, it is about enabling innovation with an eye to accuracy and brand safety. The kama Hybrid AI Agents are already driving real results across HR, marketing, compliance, and customer service. They help employees and customers get faster, more accurate answers while protecting organizational reputation. Because critical responses are governed in advance, there are no hallucinations. With Trusted Collections (your own company information), there are no bias issues and no compliance surprises. Finally, an accurate and trustworthy AI.

Each deployment follows the same principle: trust first, automation second. The system learns from human validation, becoming more accurate with every cycle. It integrates easily with Robotic Process Automation (RPA) to handle complex tasks like onboarding, claims processing, or compliance reviews. As our AI & Complex Tasks eBook explains, “Enterprises don’t need less AI. They need AI they can trust.” That trust translates directly into measurable ROI.
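As a sketch of how a governed agent might hand validated requests off to automation, the dispatcher below maps an approved intent to a registered routine and escalates anything unrecognized to a human. The intents and task functions are hypothetical stand-ins, not a real RPA vendor integration.

```python
from dataclasses import dataclass

@dataclass
class ValidatedRequest:
    """A request that has already passed the governed answer/approval step."""
    intent: str
    payload: dict

# Illustrative automation routines an enterprise might register.
def start_onboarding(payload: dict) -> str:
    return f"Onboarding workflow started for {payload['employee']}"

def file_claim(payload: dict) -> str:
    return f"Claim {payload['claim_id']} routed to processing"

RPA_TASKS = {
    "onboard_employee": start_onboarding,
    "process_claim": file_claim,
}

def dispatch(request: ValidatedRequest) -> str:
    """Hand off a trusted, validated request to the matching RPA routine."""
    task = RPA_TASKS.get(request.intent)
    if task is None:
        return "No automation registered; escalate to a human."
    return task(request.payload)

if __name__ == "__main__":
    print(dispatch(ValidatedRequest("onboard_employee", {"employee": "A. Rivera"})))
    print(dispatch(ValidatedRequest("process_claim", {"claim_id": "C-2201"})))
```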

 

6. Crossing the Divide

The State of AI in Business 2025 report highlights the growing frustration across enterprises that have invested heavily in AI but achieved little. Many are trapped in pilot mode, searching for a path to real value. The solutions kama.ai develops provide that bridge. Our Responsible AI framework replaces hallucination with verified truth, replaces guesswork with governance, and replaces risk with measurable returns. It is a practical, proven path to scaling AI responsibly and profitably.

The future of AI will belong to those who value trust as much as technology. Accuracy, empathy, and accountability will define success more than novelty or speed. At kama.ai, we have already built the system that embodies those principles. It is easy to deploy and fast to implement, a process that does NOT turn into a science project.

 

If your organization is ready to move from pilots to performance, let’s talk. Kama.ai’s Responsible Hybrid AI Agents deliver real ROI with near-zero risk. They transform your enterprise knowledge into trusted, actionable intelligence. Visit kama.ai to learn how to cross the GenAI Divide responsibly, intelligently, and profitably. Think kama.ai for trust, empathy, and accuracy.