Episode Sixteen with Brian Ritchie


Responsible AI in Action – Episode 16: Zero Tolerance AI, Auditability, and Enterprise Control, with Brian Ritchie, kama.ai

In this episode of Responsible AI in Action, Charles Dimov is joined by Brian Ritchie, CEO and Founder of kama.ai, to explore one of the most pressing challenges in enterprise AI: achieving accuracy, accountability, and auditability in high-stakes environments.

Brian brings deep expertise in building Responsible AI systems that prioritize transparency, predictability, and trust. As organizations rapidly adopt AI technologies, this conversation examines why traditional generative approaches alone are not sufficient for enterprise use, particularly in regulated industries where even small errors can carry significant consequences.

The discussion explores the growing demand for zero-tolerance AI, where systems must consistently deliver accurate outcomes without hallucinations or unintended actions. From financial services to healthcare and customer-facing applications, Brian explains how different AI architectures—deterministic and probabilistic—impact reliability, and why organizations must carefully choose how these systems are deployed.

From knowledge graphs and trusted collections to the importance of auditability and governance, this episode highlights how enterprises can move beyond experimentation toward fully controlled, production-ready AI systems.

Episode Highlights

In this conversation, Charles and Brian explore how Responsible AI requires more than performance metrics—it requires control, traceability, and intentional system design. While generative AI offers flexibility and speed, it also introduces variability and risk, making it difficult to guarantee consistent outcomes in enterprise environments.

The episode examines how deterministic AI approaches, such as knowledge graphs, provide a foundation for predictable and auditable systems, enabling organizations to deliver precise, controlled responses. At the same time, generative AI can still play a role when constrained by trusted data sources and paired with clear transparency for users.

Brian emphasizes that auditability is not just about reviewing past actions, but about enabling accountability across the entire lifecycle of an AI system—from who approved data inputs to how responses are generated. This level of governance is essential for organizations operating in regulated industries or managing high-value customer interactions.

Key insights include the distinction between experimentation and true enterprise deployment, the importance of human oversight in high-risk scenarios, and why many AI initiatives fail to deliver measurable outcomes due to a lack of control and governance.

Watch the full episode now:

Key themes include:

  • Why zero tolerance for AI errors is becoming a requirement in regulated industries
  • The difference between deterministic and generative AI, and how each impacts risk
  • How hallucinations occur and why they cannot be fully eliminated in probabilistic systems
  • The role of knowledge graphs in delivering predictable and auditable AI outcomes
  • What auditability really means, including logging, traceability, and data governance
  • How trusted collections help reduce risk in generative AI systems
  • The distinction between AI experimentation and fully deployed enterprise solutions
  • Why up to 95% of AI projects fail to achieve expected outcomes
  • The importance of human oversight in high-impact and high-risk decision-making
  • How brand trust, compliance, and accountability are directly tied to AI system design

As organizations continue to integrate AI into core operations, the challenge is no longer just about innovation—it is about control, reliability, and accountability. Ensuring that AI systems are auditable, transparent, and aligned with business and regulatory requirements will define successful enterprise adoption.

Responsible AI is not just about generating answers. It is about ensuring those answers are accurate, explainable, and fully accountable.

🎧 Watch Now

Learn more about Brian Ritchie, CEO & Founder, kama.ai

Follow on LinkedIn: Brian Ritchie  |  kama.ai