Responsible AI in Action – Episode 14: Responsible AI, sensitive information, and high-trust digital transformation, with Conor Smith, Senior Partner, Senior Planner at Narratives
In this episode of Responsible AI in Action, Charles Dimov is joined by Conor Smith, Senior Partner at Narratives, to explore what responsible AI adoption really requires when the information involved is sensitive, consequential, and culturally significant.
Conor brings deep experience working with professional services and purpose-driven organizations, including Indigenous communities, environmental planners, and governance bodies, where data carries real human, legal, and cultural weight. As AI adoption accelerates across industries, the conversation focuses on why responsible AI in high-trust environments demands far more than efficiency gains. It requires clear governance, human judgment, and an honest reckoning with what AI cannot reliably do.
The discussion examines why AI’s probabilistic nature makes it fundamentally unsuited for certain kinds of sensitive work without meaningful human oversight. From the risks of training data bias to the danger of AI systems that confidently answer questions they should decline, Conor highlights how weak governance and unchecked AI outputs can cause real harm — especially when cultural knowledge, lived experience, and community trust are at stake.
From building an internal AI policy that protects human authorship and client copyright, to making the case for composite AI architecture in high-trust environments, this episode explores how organizations can adopt AI responsibly without compromising the integrity of the work.
Episode Highlights
In this conversation, Charles and Conor discuss how responsible AI in sensitive, high-trust environments depends on clear policy, human oversight, and a deep understanding of what AI systems can actually do, and what they cannot. The episode explores why probabilistic AI introduces unique risks when working with cultural material, confidential knowledge, and communities where accuracy is non-negotiable.
Key insights include the importance of AI refusal mechanisms and human escalation paths, the role of deterministic AI architecture in grounding generative outputs, and why an AI system that says “I don’t know” is often more trustworthy than one that always produces an answer.
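To make that refusal-and-escalation pattern concrete, here is a minimal illustrative sketch in Python. It is not taken from the episode: a wrapper returns a generative model's answer only when a confidence estimate clears a bar, and otherwise says "I don't know" and routes the question to a person. The `Answer` type, the `generate` and `escalate` callables, and the 0.8 threshold are all hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Answer:
    text: str
    confidence: float  # in [0, 1], from the model or a separate verifier

@dataclass
class Outcome:
    answered: bool
    text: str

def answer_with_refusal(
    question: str,
    generate: Callable[[str], Answer],  # hypothetical generative-model call
    escalate: Callable[[str], None],    # hypothetical human escalation path
    threshold: float = 0.8,             # hypothetical confidence bar
) -> Outcome:
    """Return an answer only when it clears the confidence bar;
    otherwise refuse explicitly and route the question to a human."""
    answer = generate(question)
    if answer.confidence >= threshold:
        return Outcome(answered=True, text=answer.text)
    # Refusal is a feature, not a failure: decline and escalate.
    escalate(question)
    return Outcome(
        answered=False,
        text="I don't know. This has been passed to a human reviewer.",
    )

# Example wiring with stub callables:
result = answer_with_refusal(
    "What does this clause mean for the community?",
    generate=lambda q: Answer(text="...", confidence=0.35),
    escalate=lambda q: print(f"Escalated to reviewer: {q}"),
)
print(result.text)  # -> "I don't know. This has been passed to a human reviewer."
```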
Key themes include:
- Why sensitive information, including cultural knowledge, lived experience, and Indigenous identity, requires a fundamentally different approach to AI
- The tension between AI speed and accuracy in high-trust professional environments
- How Narratives built an internal AI policy to preserve human authorship and protect client copyright
- Why AI’s bias toward affirmation makes it dangerous for consequential, risk-intolerant work
- The limits of general-purpose LLMs when working with Indigenous culture, language, and practice
- The difference between using AI as a productivity tool and using it responsibly as a thinking foil
- How composite AI, which combines deterministic and generative elements, addresses the trust gap in sensitive environments (a sketch follows this list)
- The critical role of refusal mechanisms, escalation paths, and human judgment in responsible AI systems
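As a rough illustration of the composite pattern mentioned above, the sketch below pairs a deterministic lookup over vetted sources with a generative step that runs only when grounding exists; when the deterministic layer comes up empty, the system refuses and escalates rather than improvising. This is a hypothetical sketch, not Narratives' architecture; `APPROVED_SOURCES`, `call_llm`, and the topic keys are invented for illustration.

```python
from typing import Optional

# Hypothetical curated index of vetted, community-approved sources.
APPROVED_SOURCES = {
    "land-use policy": "Vetted excerpt from the approved planning document...",
}

def deterministic_lookup(topic: str) -> Optional[str]:
    """Deterministic layer: exact, rule-based retrieval over vetted material.
    Returns approved text or nothing; it never guesses."""
    return APPROVED_SOURCES.get(topic)

def call_llm(prompt: str) -> str:
    """Stub standing in for any generative-model client."""
    return f"[answer grounded in the supplied source; prompt was {len(prompt)} chars]"

def composite_answer(topic: str, question: str) -> str:
    """Generative layer runs only when the deterministic layer finds grounding;
    otherwise the system refuses rather than improvising."""
    source = deterministic_lookup(topic)
    if source is None:
        return "I can't answer this from approved sources; escalating to a human reviewer."
    prompt = f"Answer using ONLY this source:\n{source}\n\nQuestion: {question}"
    return call_llm(prompt)

print(composite_answer("land-use policy", "What zoning changes are proposed?"))
print(composite_answer("oral history", "Summarize this community's traditions."))
```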
As organizations move toward broader AI adoption, those working in sensitive, high-trust environments will need to go further than most. Responsible AI is not just about what AI can do. It is about knowing what AI should not do, building the governance structures to enforce those limits, and ensuring that human judgment remains at the centre of consequential decisions.
🎧 Watch Now
Learn more about Conor Smith, Senior Partner, Senior Planner, Narratives
Follow on LinkedIn: Conor Smith | Narratives
Website: narrativesinc.com
