Contact centers have long been some of the most data-dense environments in modern enterprises.
Every day, they process personal identifiers, payment-adjacent details, behavioral signals, and emotionally charged context shared in moments of stress. The sheer number of global regulations, internal policies, and vendor controls required to safeguard that data is staggering, and the complexity, enforcement, and penalties attached to them only keep growing.
Even under the operating models of twenty years ago, getting data security right in customer support was a heavy lift. Now layer AI on top of that.
Agent assist, auto-summaries, conversational bots, predictive routing, and fully automated virtual agents have moved from experimentation to production at breathtaking speed. For many support leaders, the idea of introducing AI into already high-risk environments feels like pouring gasoline on a fire.
But this is where the story gets interesting.
Despite the growing technical complexity of data security, the most common point of failure has remained remarkably consistent throughout call center history: us. Yeah, the humans.
A few firsthand examples:
I witnessed one of the clearest cases of this early in my career at America Online. In 1998, McVeigh v. Cohen, a case now widely cited as an early example of social engineering, exposed just how dangerous small, seemingly inconsequential data disclosures, offered as "help" by an agent, can be. A U.S. Navy investigator repeatedly contacted AOL customer service and, by piecing together minor bits of information from different representatives, obtained login credentials and access to a servicemember's AOL account, along with his personal emails, chats, and internet browsing history. Those details were used to infer the individual's sexual orientation, triggering an attempted discharge under the military's "Don't Ask, Don't Tell" policy. The outcome was a federal injunction, a public settlement, and AOL's acknowledgment that its customer service processes had failed to protect customer privacy.
While working at BioWare on a large-scale Star Wars online game, I saw how emotionally charged service decisions, such as account suspensions, account terminations, and other terms-of-service penalties, could spill beyond the screen into my own and my team's lives. Upset players repeatedly contacted support agents, escalated anger into harassment, maliciously doxed agents and leadership, accosted us at our homes, and engaged in physical violence. All of this happened in spite of our warnings, best efforts, best intentions, and best practices. These weren't data breaches in the classic sense, but they underscored the same truth: when human interactions are poorly controlled, risk expands in consequential ways.
More recently, the consequences of human access failure played out at an even larger scale at a company I helped build. At Coupang, one of South Korea's largest technology companies, a former employee, a Chinese national, retained internal access through an active authentication key long after leaving the organization. Over months, that access was used to retrieve personal data associated with tens of millions of customers, including names, addresses, phone numbers, emails, and order histories. The breach ultimately triggered executive departures, regulatory scrutiny, US diplomatic tensions, and comments from the sitting US Vice President, not to mention widespread public fallout in Korea. Once again, the root cause was not a rogue algorithm. It was human access that was never properly revoked.
And that’s just it.
AI systems introduce new risks, yes, of course. But they also eliminate entire classes of human-driven failure. While deploying AI into customer experience requires rigorous design, testing, and governance, the likelihood that an AI agent will intentionally sell data, disclose a colleague’s home address, or socially engineer its way around policy is vanishingly small.
Not zero. Nothing is ever zero.
But critically, the failure modes of AI are bounded by logic, constraints, and prompts. When an AI system behaves incorrectly, whether a summary includes too much information, a response crosses a boundary, or an escalation is missed, the fix is often structural and immediate: a policy change, a prompt adjustment, or a new guardrail, applied once and enforced everywhere in real time.
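To make that concrete, here is a minimal sketch in Python of what a centrally enforced guardrail can look like. The policy object, field names, and patterns are illustrative assumptions, not any particular vendor's API; the point is that editing one policy changes behavior for every interaction the moment it is deployed.

```python
import re
from dataclasses import dataclass, field

# Illustrative guardrail policy: defined once, applied to every
# AI-generated draft before it reaches a customer or an agent.
@dataclass
class GuardrailPolicy:
    # Hypothetical patterns the model is never allowed to emit.
    blocked_patterns: dict[str, str] = field(default_factory=lambda: {
        "credit_card": r"\b(?:\d[ -]?){13,16}\b",
        "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    })
    # Hypothetical phrases that force the contact to a human.
    escalation_triggers: tuple[str, ...] = ("legal action", "regulator", "self-harm")

def apply_guardrails(policy: GuardrailPolicy, draft: str) -> tuple[str, bool]:
    """Redact disallowed content and flag drafts that need a human."""
    redacted = draft
    for label, pattern in policy.blocked_patterns.items():
        redacted = re.sub(pattern, f"[REDACTED {label.upper()}]", redacted)
    needs_human = any(t in draft.lower() for t in policy.escalation_triggers)
    return redacted, needs_human

# Changing the policy here changes behavior everywhere, immediately.
policy = GuardrailPolicy()
safe_reply, escalate = apply_guardrails(policy, "Your card 4111 1111 1111 1111 is on file.")
```

Contrast that with retraining thousands of agents: the correction is a code or configuration change, not a behavior-change program.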
Correcting human behavior is far harder.
Humans require repeated training, constant auditing, and ongoing quality monitoring, and they still operate under fatigue, emotion, and pressure. AI, by contrast, does not get tired, does not improvise outside its constraints, and does not "forget" its training between contacts. This does not mean generative AI is risk-free. Unlike traditional automation, GenAI doesn't just retrieve or route; it creates. Errors can sound confident. Summaries can persist beyond the moment. Copilots can subtly influence decisions without agents realizing it.
The most effective organizations don’t try to eliminate these risks. They design for containment. They define what AI is allowed to see, what it is allowed to generate, and most importantly, when a human must be accountable for the outcome. They build systems that can be audited, corrected, and improved centrally, rather than relying on thousands of individual interactions to go perfectly every time. As AI becomes more embedded in contact centers, trust will no longer be a byproduct of good intentions or strong training alone. It will be an engineered outcome, built through constraints, transparency, and rapid correction.
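As one illustration of what designing for containment might look like in practice, the sketch below (all field names, intents, and thresholds are hypothetical) scopes what the AI is allowed to see, routes defined cases to a human owner, and records every decision for audit.

```python
from datetime import datetime, timezone

# Hypothetical containment policy: what the model may see, and when a
# human must be accountable for the outcome.
ALLOWED_INPUT_FIELDS = {"order_status", "shipping_city", "product_name"}
HUMAN_REQUIRED_INTENTS = {"account_termination", "refund_over_limit", "legal_complaint"}

AUDIT_LOG: list[dict] = []  # in practice, an append-only audit store

def scope_context(raw_context: dict) -> dict:
    """Pass only explicitly allowed fields to the model."""
    return {k: v for k, v in raw_context.items() if k in ALLOWED_INPUT_FIELDS}

def route(intent: str, context: dict) -> str:
    """Decide whether AI may answer or a human must own the outcome."""
    decision = "human" if intent in HUMAN_REQUIRED_INTENTS else "ai"
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "intent": intent,
        "fields_seen": sorted(scope_context(context)),
        "decision": decision,
    })
    return decision
```

The specifics will differ by stack; the design choice that matters is that access, accountability, and auditability are defined centrally rather than left to thousands of individual interactions going perfectly every time.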
In the age of intelligent CX, the question is no longer whether AI introduces risk. The real question is whether leaders are willing to replace fragile, human-dependent controls with systems that can be corrected, governed, and improved at scale.