When AI Touches Sensitive Information

Brian Ritchie (kama.ai), Felicia Anthonio (#KeepItOn coalition), and Dr. Moses Isooba (Executive Director, UNNGOF), for the Forus Workshop on AI Activism

When AI Touches Sensitive Information, “Mostly Fine” Is a Liability
Responsible Composite AI Agents: the Only Viable Path

Artificial intelligence is entering a new operational phase. AI systems now influence real decisions and real outcomes across organizations, and we are only at the very beginning of this new phenomenon. AI agents no longer sit at the edges of work or operate only in low-risk environments. AI increasingly touches sensitive information, and that shift changes everything.

In these environments, accuracy is not optional. It is foundational. “Mostly fine” is not acceptable, because the cost of an error is no longer abstract. Sensitive client issues cannot be reduced to data points without considering risk. When AI participates in decisions involving people, communities, or regulated systems, “mostly fine” becomes a liability. This is where conventional generative AI fails, and where Responsible Composite AI agents become essential.

Sensitive Information Changes the Rules

Sensitive information refers to any data, narrative, or knowledge where misinterpretation, misuse, or inappropriate disclosure can cause material harm, even if the information is factually accurate. But sensitive information is not defined by regulation alone. It is defined by consequence. Sensitivity arises when errors can cause harm, whether that harm is personal, cultural, legal, or irreversible. In many cases, the impact of misuse far outweighs the intent behind it.

Sensitive information often includes personal identity data, health records, legal documentation, financial details, and employment information. It also includes cultural, Indigenous, and community-owned knowledge, where ownership and stewardship matter as much as factual accuracy. These contexts carry power imbalances, expectations of care, and moral responsibility.

When AI mishandles such information, the errors are not technical defects. They are trust failures. The damage lingers through reputational harm, legal exposure, and long-term erosion of confidence. In these environments, restraint is not a limitation. It is a requirement.

Why Generative AI Fails in Sensitive Contexts

Generative AI systems are designed to optimize for linguistic probability. They predict plausible language rather than enforce truth, ethics, or judgment. In sensitive environments, this distinction becomes critical: fluent responses can mask deep uncertainty or contextual misalignment.

Generic LLM (large language model) systems lack intrinsic awareness of consequence. They do not know when silence is safer than speech, when escalation is required, or when a response may cause harm even if it appears factually correct. This is not a tuning issue. It is a structural limitation.

Research reinforces this reality. In October 2025, a Reuters & BBC study based on 3,000 observations found that “Nearly half of AI assistant responses to news-related queries contained significant inaccuracies.” In sensitive domains, that error rate is simply unacceptable. At, say, 10,000 queries a day, it would mean thousands of flawed answers daily; at scale, a technical imperfection explodes into an operational liability.

What Is a Responsible Composite AI Agent?

Responsible Composite AI takes a fundamentally different approach. It reduces an organization’s reliance on a single probabilistic model (think: large language models). Instead, it combines multiple intelligence layers and technologies, each with a clearly defined role and accountability model.

Deterministic intelligence provides verified, non-probabilistic answers that are pre-approved and fully trusted. These answers are not subject to hallucination, bias, or the other trust-eroding problems that plague most GenAI (LLM) focused solutions. Governed generative AI supports exploration, but only within curated, trusted sources and with explicit boundaries. Human oversight remains central, with escalation triggers that fire when ambiguous, ethically uncertain, or risky questions arise.

Every response is auditable. Every action is traceable. Accountability is built into the system, not added later. Such architectures don’t try to make generative AI safer through prompts. Instead, they place generative AI where it belongs: inside a system that is right by design, with governance, human judgment, and guardrails to keep it safe.
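
As a minimal sketch of how these layers might fit together, the Python below routes each query through a deterministic layer of pre-approved answers first, falls back to a governed generative layer only above a confidence threshold, and escalates everything else to a human, auditing every response along the way. The APPROVED_ANSWERS table, the generate_from_curated_sources callable, and the 0.9 threshold are illustrative assumptions, not kama.ai’s published implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, Optional, Tuple

@dataclass
class Answer:
    text: str
    source: str       # "deterministic", "generative", or "human_escalation"
    sanctioned: bool  # True only for pre-approved, human-verified content

# Hypothetical pre-approved knowledge base: query -> verified answer.
APPROVED_ANSWERS = {
    "what is your privacy policy?": "Our privacy policy is published at ...",
}

AUDIT_LOG = []  # every response lands here, so every action stays traceable

def _audit(query: str, answer: Answer) -> None:
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "query": query,
        "source": answer.source,
        "sanctioned": answer.sanctioned,
    })

def answer_query(
    query: str,
    generate_from_curated_sources: Callable[[str], Tuple[Optional[str], float]],
) -> Answer:
    """Route one query through the composite layers in priority order."""
    # 1. Deterministic layer: verified, non-probabilistic answers come first.
    approved = APPROVED_ANSWERS.get(query.strip().lower())
    if approved is not None:
        answer = Answer(approved, source="deterministic", sanctioned=True)
    else:
        # 2. Governed generative layer: exploration bounded to curated sources.
        draft, confidence = generate_from_curated_sources(query)
        if draft is not None and confidence >= 0.9:  # illustrative threshold
            answer = Answer(draft, source="generative", sanctioned=False)
        else:
            # 3. Human oversight: escalate instead of guessing.
            answer = Answer(
                "I can't answer that reliably; escalating to a human reviewer.",
                source="human_escalation",
                sanctioned=False,
            )
    _audit(query, answer)
    return answer
```

Note the ordering: the probabilistic layer is reached only after the deterministic layer has declined, and escalation, not a best guess, is the default when neither layer is confident.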

Architectural Governance

In high-risk environments, governance can’t be bolted on later. It needs to be part of the architecture itself. Sensitive domains need clear separation between exploratory dialogue and authoritative responses. They need explicit permission models and role-based behavior enforced throughout the system.

Users need to understand how answers are produced. They need to know what is sanctioned and human-confirmed, and what is generated. Most importantly, systems also need to know when NOT to answer. The ability to refuse is as important as the ability to respond. That is where today’s generative AI systems often fail: when an LLM does not have a firm answer, it will often hallucinate one that nonetheless sounds convincing to users.
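
As one hedged illustration of what that enforcement could look like, the sketch below applies a role-based permission check before any content is produced, labels sanctioned and generated answers differently, and refuses rather than guesses when no firm answer exists. The roles, topics, and 0.9 confidence bar are hypothetical, not a description of any particular product.

```python
from enum import Enum
from typing import Optional

class Role(Enum):
    PUBLIC = "public"
    CASE_WORKER = "case_worker"
    ADMINISTRATOR = "administrator"

# Hypothetical permission model: which roles may ask about each topic at all.
TOPIC_PERMISSIONS = {
    "general_faq": {Role.PUBLIC, Role.CASE_WORKER, Role.ADMINISTRATOR},
    "health_record": {Role.CASE_WORKER, Role.ADMINISTRATOR},
}

REFUSAL = "I'm not able to answer that. The request has been logged for review."

def respond(
    topic: str,
    role: Role,
    sanctioned_answer: Optional[str],
    generated_answer: Optional[str],
    confidence: float,
) -> str:
    # Role-based behavior: refuse before any content is produced.
    if role not in TOPIC_PERMISSIONS.get(topic, set()):
        return REFUSAL

    # Authoritative responses are human-confirmed and labeled as such.
    if sanctioned_answer is not None:
        return f"[Sanctioned, human-confirmed] {sanctioned_answer}"

    # Exploratory dialogue is labeled as generated and withheld when shaky.
    if generated_answer is not None and confidence >= 0.9:  # illustrative bar
        return f"[AI-generated, unverified] {generated_answer}"

    # No firm answer: refusing beats hallucinating.
    return REFUSAL
```

The key design choice is that refusal is the fall-through case: the system has to earn the right to answer, rather than having to find a reason to stay silent.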

This isn’t about slowing innovation. It is about preventing harm at scale. The Stanford AI Index Report 2025 confirms the urgency: it counts 233 AI-related incidents in 2024, a 56.4% increase year over year. These are not abstract risks. They are failures already crying out for better, more robust AI solutions.

Narratives and Responsible Composite AI Align Naturally

Narratives are not datasets. They are lived realities shaped by memory, identity, and power. When organizations work with narratives, accuracy alone isn’t enough; context and ownership matter just as much. As Somia Sadiq, founder of Narratives, points out: “Stories when received responsibly give us windows into peoples’ lives, their memories, their lived realities, their voices. It is these stories that can enrich our work for the future generations, and there’s not much room to muck around with that.”

Responsible Composite AI agents, when set up correctly, preserve context and respect stewardship. They support human judgment rather than replacing it, and they enable reflection instead of automation-first workflows. This makes them well suited to sensitive narrative environments, where trust must be continually earned. A purely illustrative sketch of what stewardship-aware data handling could look like follows below.
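
Neither kama.ai nor Narratives has published a data schema, so this sketch is a hypothetical illustration of one idea from this section: a story never travels without its ownership and consent metadata, and automation defers to human review by default. The StoryRecord fields and the may_use gate are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StoryRecord:
    """A narrative that never travels without its stewardship context."""
    text: str
    community: str              # who owns and stewards this story
    consented_uses: frozenset   # purposes the storyteller agreed to
    requires_human_review: bool = True  # reflection before any reuse

def may_use(record: StoryRecord, purpose: str) -> bool:
    """Out-of-scope uses are refused outright, and even in-scope uses
    wait for human judgment until review clears the record."""
    return purpose in record.consented_uses and not record.requires_human_review

# Usage: the default posture is refusal, not automation-first reuse.
story = StoryRecord(
    text="...",
    community="Example community",
    consented_uses=frozenset({"land_use_planning"}),
)
assert may_use(story, "marketing") is False          # outside consent scope
assert may_use(story, "land_use_planning") is False  # still awaiting review
```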

Automation without context erodes meaning. Responsible Composite AI agents, as delivered by kama.ai, avoid these traps by design.

From “Can We?” to “Should We?”

AI maturity is changing. Speed and scale no longer define progress. Judgment does. The most trusted systems are those that understand their limits: well-designed systems escalate uncertainty and refuse unsafe actions.

As AI moves deeper into sensitive domains, standards will rise, and Responsible Composite AI agents will become the baseline. This is true not because they are safer in theory, but because they survive in practice. In environments where trust is non-negotiable, precision matters. “Mostly right” is not progress. It is a liability.

Focus on Responsibility

Sensitive information requires better systems. It deserves intentional design that prioritizes trust, accountability, and restraint. Organizations working with high-risk data need to ask whether their AI can refuse, explain, and escalate responsibly.

Responsible Composite AI agents are no longer merely an option. They are the only grounded path forward, and responsible AI is something every organization needs to think about carefully. At kama.ai, we build AI for trust. Accuracy, empathy, and accountability guide every layer. Partners like Narratives understand this, and they know how to correctly apply this technology in sensitive information situations.

If it has to be right, it has to be kama.ai.

Let’s build AI that understands its responsibility.