When AI Gets Culture Wrong

Brian Ritchie, kama.ai, Felicia Anthonio, #KeepItOn coalition, and Dr. Moses Isooba, Executive Director of UNNGOF for Forus Workshop on AI Activism

When AI Gets Culture Wrong
Why Auditability and Governance Matter More Than Ever

Artificial intelligence is quickly becoming our primary source of truth. Answering our questions, simple and complex, was once the exclusive domain of the search engine; now Google shares that role with AI. From education to cultural exploration, people increasingly rely on AI to interpret language, history, and identity.

But when AI gets it wrong, the consequences go far beyond a minor technical error. This is especially true in culturally sensitive contexts, where mistakes undermine trust and authenticity, and can even erode the preservation of the specialized knowledge itself.

A recent CBC News article, “Be wary of AI-generated content on Indigenous cultures, say experts,” highlights growing concerns. Researchers and Indigenous community leaders describe the risks of AI-generated content for Indigenous languages and cultural teachings. Their message is clear: without governance, AI does not just make mistakes, it causes harm.

 

The Problem: AI That Sounds Right, But Isn’t

Generative AI systems are designed for fluency. They produce answers that sound confident, coherent, and credible. Herein lies the problem: these Large Language Model (LLM) systems are NOT designed for truth.

As highlighted in the CBC article, researcher Michael Sherbert warns that AI systems – particularly when trained on limited datasets – can generate entirely fabricated words, teachings, or cultural narratives. In Indigenous contexts, where datasets are often incomplete or fragmented, this risk is amplified. The results are often a convincing illusion.

Users who are not deeply connected to a specific community may trust these responses. The language appears authentic. The tone feels authoritative. Yet the content may be incorrect, generalized, or entirely constructed.

Sherbert describes how this can flatten distinct Indigenous identities into a single, “pan-Indigenous” narrative. In effect, AI is not just misinforming; it is reshaping cultural understanding in ways that are inaccurate and simply inappropriate. This is not just a data issue. It is a governance issue.

 

Root Cause: AI Without Accountability

At the core of the problem is how generative AI works.

Large Language Models (LLMs) are probabilistic systems. They generate responses based on patterns in data, not verified knowledge. When that data is sparse or culturally sensitive, the likelihood of hallucination increases. This is not theoretical.

According to Stanford HAI (2025) research, hallucination rates for task sets that contain both simple and complex cases range from 3% to 20%. Even at the low end, this level of error is unacceptable in domains where accuracy and authenticity are critical.
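
The mechanics behind this are simple to sketch. A language model chooses each next word by sampling from a learned probability distribution, not by consulting a verified source. The toy Python below uses an invented vocabulary and invented probabilities, not any real model, to show how a fluent but wrong continuation can be sampled, and why sparse data makes that more likely.

```python
import random

# Toy illustration only: a language model picks its next word by sampling
# from a probability distribution learned from training data, not by
# consulting a verified source. The vocabulary and probabilities below are
# invented for this example.
next_word_probs = {
    "accurate-term": 0.55,        # the correct continuation
    "plausible-but-wrong": 0.30,  # sounds right, isn't
    "fabricated-term": 0.15,      # pure hallucination
}

def sample_next_word(probs):
    """Pick a word in proportion to its learned probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Even with the correct answer most likely, almost half the samples here are
# wrong. Sparse or fragmented training data flattens the distribution further,
# so the chance of a fluent but fabricated answer climbs.
print(sample_next_word(next_word_probs))
```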

More importantly, these systems lack inherent accountability. As Brian Ritchie, Founder and CEO of kama.ai, points out in the CBC article, one of the biggest challenges is governance. How do we ensure that AI does not produce biased, incorrect, or culturally offensive outputs?

Today, there is no consistent answer outside deterministic systems like kama.ai’s. Users cannot easily determine whether content is authentic. Even the references cited by probabilistic, LLM-based AI systems can be fabricated. This creates a fundamental disconnect between perceived reliability and actual accuracy.

 

Why This Matters

Culture is not just data. Indigenous languages and cultural teachings are not interchangeable mathematical concepts. They aren’t merely datasets. They are living systems of knowledge, grounded in community, context, and tradition.

When AI generates incorrect cultural content, the impact is significant. It can undermine language revitalization efforts. It can distort teachings passed down through generations. It can erode trust in digital tools that are meant to support communities.

The CBC article reinforces this point through voices like Kaitlyn Lazore, who emphasizes that authentic cultural understanding requires real engagement with community, not shortcuts through technology. AI, when used without governance, creates the illusion of access without the depth of understanding. This goes to the heart of why auditability and control are essential.

 

The Alternative

There is a more responsible path forward.

Rather than relying solely on probabilistic AI, some communities are adopting structured knowledge systems: a category of deterministic AI called Responsible AI Agents. These systems are built on curated, verified content, and authority remains with the community itself.

As Sherbert notes, AI needs to be grounded in structured knowledge rather than pattern prediction. When this is done, the likelihood of fabricated or misleading outputs drops significantly. More importantly, communities retain control over what knowledge is shared and how it is represented. This approach aligns directly with Responsible AI principles.

At kama.ai, this is implemented through deterministic Knowledge Graph AI combined with governed generative capabilities. Together we refer to this as GenAI’s Sober Second Mind™. It is all part of the Responsible Composite AI Agent technology developed by kama.ai. Trusted Collections ensure that data sources are curated and approved. Process flows are defined in advance. Outputs are traceable and auditable. As a result, AI does not operate freely. It operates within governed boundaries.
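
As a rough illustration of that general pattern (not kama.ai’s actual implementation or API; all names and fields here are hypothetical), a governed answer path might look like the sketch below: responses are drawn only from a curated collection, every answer carries its source, and the system declines rather than generating when no approved content exists.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the general pattern described above. Names, fields,
# and content are illustrative; this is not kama.ai's actual API or schema.

@dataclass
class CuratedEntry:
    answer: str        # community-approved content
    source: str        # where the content comes from
    approved_by: str   # who governs this entry

TRUSTED_COLLECTION = {
    "example question": CuratedEntry(
        answer="Community-approved answer text.",
        source="Curated knowledge base entry #123",
        approved_by="Community knowledge keepers",
    ),
}

def governed_answer(query: str) -> Optional[CuratedEntry]:
    """Deterministic lookup: same query, same traceable answer, or nothing."""
    # Nothing is generated outside the approved collection; if there is no
    # approved entry, the system declines rather than improvising.
    return TRUSTED_COLLECTION.get(query.strip().lower())

result = governed_answer("Example question")
if result is None:
    print("No approved answer is available for this question.")
else:
    print(f"{result.answer} (source: {result.source}; governed by: {result.approved_by})")
```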

 

Auditability: The Foundation of Trust

Auditability is what makes this approach viable. Every AI response needs to be traceable. Where did the information come from? How was it generated? What sources were used? Who governs the content?

Without these answers, trust simply won’t exist. 

The CBC article underscores how difficult it is for users to assess whether AI-generated information is accurate or authentic. This is precisely the gap auditability is designed to close.

When every output is linked to a verified source, when decision paths are visible, and when governance is embedded into the system, AI becomes accountable. Without auditability, AI systems remain black boxes that cannot consistently be trusted.
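
As a sketch of what such traceability could capture (field names are assumptions for illustration, not a specific product schema), an auditable response record might bundle the answer with its sources, its generation method, and its governing authority:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Illustrative only: field names are assumptions for this sketch, not a
# specific product schema.

@dataclass
class AuditRecord:
    question: str
    response: str
    sources: List[str]        # where the information came from
    generation_method: str    # how it was generated
    governed_by: str          # who governs the content
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditRecord(
    question="What does this term mean?",
    response="Community-approved definition text.",
    sources=["Trusted collection entry #42"],
    generation_method="deterministic knowledge-graph lookup",
    governed_by="Community content stewards",
)
print(record)
```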

 

Moving Forward: Responsible AI for Cultural Integrity

AI has the potential to support language revitalization and cultural preservation. It can expand access, enhance education, and help scale knowledge sharing.

But this is only true if it is designed responsibly. This means:

  • Embedding governance from the start
  • Ensuring community ownership of data
  • Implementing full auditability and traceability
  • Using deterministic systems where accuracy is critical
  • Being transparent about how AI is used

The future of AI in cultural contexts will not be defined by capability alone. It will be defined by responsibility. If an AI system cannot demonstrate where its knowledge comes from, how it was generated, and who governs it, it should not be trusted. This is especially true in domains where authenticity matters most.

At kama.ai, we believe AI needs to be accountable to the communities it serves.

When it comes to culture, being “mostly right” is just not enough.

It has to be provably right.

 

When it’s GOT to be right, it’s GOT to be kama.ai