Mar 25, 2026

The AI Risks Your Enterprise Isn’t Seeing 

Patrick Zeller & Keith Weisman
JetStream’s AI advisors, Patrick Zeller and Keith Weisman, have been on the frontlines of privacy, prosecution, and enterprise security. Now they’re asking every organization the same question: “Do you really trust your AI?” 

Between us, we have spent decades prosecuting computer crimes, building global privacy programs for Fortune 100 companies, leading enterprise incident response, and sitting across the table from regulators when things go wrong. We have been on the enforcement side, the in-house legal side, the architecture side, and the field engineering side of AI and data risk — often before the organizations experiencing those risks understood what was happening. 

That background is why what we’re seeing right now concerns us. 

The Mandate That Changed Everything 

In 2024, corporations across virtually every industry began mandating the use of generative AI as a strategic imperative. The directive was clear: adopt AI or fall behind. Efficiency targets were set. Departments were told to integrate AI into workflows. Budgets were reallocated. And across every function — legal, finance, engineering, marketing, HR — employees started using these tools to do real work, with real data. 

What was far less clear was what those employees were feeding into these systems. 

Generative AI is a radical departure from prior enterprise software. It is not a passive retrieval tool. It is a powerful, active system that processes whatever input it receives, reasons across domains, and — in its most advanced agentic forms — takes autonomous action. That power comes with a set of risks that most organizations have not yet mapped, let alone governed. 

Over the last six months, we’ve presented to rooms full of CISOs and CIOs at major enterprise conferences. These are sophisticated leaders who believe they have a handle on AI risk. Then we walk them through the full exposure picture. At one recent leadership event, senior security executives told us afterward: “We thought you had one slide on risks and it kept going, and it kept getting worse, and we had no idea the extent of the risks.” 

That reaction is not an outlier. It is the pattern. 

Who We Are and Why This Perspective Matters 

Patrick has spent over twenty years advising Fortune 100 companies on privacy, cybersecurity, and data protection — including global programs at Abbott, Amgen, and Gilead Sciences. Before that, he served as a federal computer crimes prosecutor and state regulator, which means he has seen how compliance failures look from the enforcement side long before they become headlines. He has built four global privacy programs as Chief Privacy Officer and has sat on CIO staff as a cybersecurity attorney — the regulatory side, the in-house legal side, and the architecture side, all from a single vantage point. 

Keith brings thirty years of hands-on cybersecurity and services leadership, beginning with enterprise security consulting at Accenture and PricewaterhouseCoopers before leading complex investigations on behalf of corporations, legal counsel, and regulatory bodies — spanning incident response, computer forensics, eDiscovery, and expert witness engagements. At JetStream, Keith leads the Forward Deployed Engineering team, translating complex technical findings into defensible narratives for legal teams, outside counsel, and executive stakeholders. 

JetStream itself was built by the team behind CrowdStrike, SentinelOne, Attivo Networks, and Dazz — operators who have been through every major security platform shift of the last decade, with over two hundred CISO and CIO meetings during the company’s research phase. This advisory isn’t borrowed credibility. It is built from frontline observation of exactly what it warns against. 

A Framework for Understanding AI Risk 

We organize enterprise AI exposure into three categories: what goes into AI systems, what comes out, and the emerging risks that most organizations haven’t started thinking about yet. Each is deeper than it first appears — and the first category alone contains three distinct risk areas that are already creating real legal and regulatory liability. 

Input Risks: What You’re Putting In 

This is where most organizations first realize the scale of their exposure. The 2024 AI mandates sent employees across every function into generative AI tools — often without any visibility from security or legal. What they’re putting into those systems falls into three critical risk areas. 

Trade secrets and corporate confidential information

Employees are pasting proprietary information directly into AI prompts: unannounced merger targets, unreleased financial forecasts, product formulas, engineering specifications, competitive pricing architectures, internal strategic roadmaps. Trade secret protection under the Defend Trade Secrets Act and state-level equivalents requires companies to take reasonable measures to maintain secrecy. Entering proprietary information into a third-party AI tool without adequate safeguards may legally relinquish that protection — permanently. Beyond disclosure, many AI tools retain user inputs to improve underlying models. A company’s confidential information entered today could surface in a response generated for a competitor tomorrow. In regulated industries — financial services, healthcare, defense contracting — the exposure multiplies across securities law, HIPAA, and national security frameworks. 
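
What “reasonable measures” can look like in practice is easier to picture with a concrete example. Below is a minimal sketch, in Python, of a gate that scans outbound prompts for classification markers before anything reaches a third-party model. The marker list, the function names, and the blocking behavior are illustrative assumptions, not a description of any specific product.

```python
# Illustrative sketch: refuse to forward prompts that carry document
# classification markers. The marker list and the policy decision are
# assumptions for illustration, not a product specification.

CLASSIFICATION_MARKERS = (
    "CONFIDENTIAL",
    "ATTORNEY-CLIENT PRIVILEGED",
    "TRADE SECRET",
    "INTERNAL USE ONLY",
)

def check_prompt(prompt: str) -> list[str]:
    """Return any classification markers found in the prompt."""
    upper = prompt.upper()
    return [m for m in CLASSIFICATION_MARKERS if m in upper]

def submit_to_ai(prompt: str) -> str:
    """Forward a prompt only if it carries no classification markers."""
    hits = check_prompt(prompt)
    if hits:
        raise PermissionError(f"Prompt blocked; markers found: {hits}")
    # ... hand off to the approved AI endpoint here ...
    return "forwarded"

print(check_prompt("TRADE SECRET: Q4 formula changes"))  # ['TRADE SECRET']
```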

Personal information of employees and customers

In the normal course of business, employees process extraordinary volumes of personal data: personnel files, medical records, payroll data, immigration status, customer transaction histories. When any of this enters a generative AI system, it triggers a complex cascade of legal obligations. Under the GDPR, routing personal data through a third-party AI constitutes a new processing activity requiring legal justification, a formal data processing agreement, and often a data protection impact assessment. In the United States, the California Privacy Rights Act, Illinois’ Biometric Information Privacy Act — which has already generated billions in class action liability — and dozens of state breach notification laws create overlapping obligations that most enterprise AI deployments have not addressed. A vendor breach exposing inputs could simultaneously trigger notification requirements across multiple jurisdictions — for personal data the enterprise did not even realize it had shared. 
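
To make the pre-submission control concrete, here is a minimal sketch of redacting a few common personal-data patterns before a prompt leaves the enterprise. The regexes and placeholder tokens are simplified assumptions; a production deployment would rely on a vetted DLP classifier covering far more data types and jurisdictions.

```python
import re

# Illustrative sketch: redact a few common personal-data patterns before
# a prompt reaches a third-party AI tool. These patterns are simplified
# assumptions; real DLP coverage is far broader.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> tuple[str, dict[str, int]]:
    """Replace matches with typed placeholders; count what was removed."""
    counts: dict[str, int] = {}
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[REDACTED-{label}]", text)
        if n:
            counts[label] = n
    return text, counts

clean, removed = redact("Reach Jane at jane@example.com or 555-867-5309.")
print(clean)    # Reach Jane at [REDACTED-EMAIL] or [REDACTED-PHONE].
print(removed)  # {'EMAIL': 1, 'PHONE': 1}
```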

Attorney-client privilege

Of all AI risks to the enterprise, the erosion of privilege is the least understood outside legal departments — and among the most strategically catastrophic. Privilege shields confidential communications between lawyers and clients from compelled disclosure. It is what allows companies to seek candid legal advice about regulatory exposure and litigation risk without fear that those conversations will be weaponized against them. The protection is fragile: it is waived by disclosing privileged communications to third parties, and generative AI systems are unambiguously third parties under prevailing legal analysis. A general counsel who pastes a litigation strategy memo into an AI tool to request a summary has potentially waived privilege over that document. Bar associations across multiple states have issued ethics guidance warning attorneys of their duties of confidentiality when using AI with client information. Courts are now being asked to determine whether AI vendors qualify as functional agents of the attorney-client relationship — with enormous litigation consequences riding on the answer. 

Output Risks: What’s Coming Back 

The input side gets the most attention, but the output side creates a different kind of exposure — one that can be harder to detect and equally damaging. AI-generated content can contain hallucinations presented as fact. It can suggest names, designs, or content that collide with trademarks or copyrights already held by others, creating IP contamination that an organization may not catch before it ships. And when AI is used in automated decision-making — hiring, firing, credit decisions, insurance underwriting — the bias embedded in those outputs creates legal liability that is only beginning to be tested in court. The challenge is that AI outputs often look correct. A recommendation seems reasonable. An analysis appears well-sourced. But there is no accountability trail connecting the output to the data that produced it, and no governance layer verifying compliance with the organization’s legal obligations. 
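
One way to picture the missing accountability trail is a thin wrapper that records, for every model call, who asked, which model answered, and hashes of the prompt and output, so any generated artifact can later be traced to the interaction that produced it. A minimal sketch follows; the `call_model` stub and the record format are assumptions for illustration only.

```python
import hashlib
import json
import time

def _digest(text: str) -> str:
    """Short SHA-256 fingerprint, so content can be matched without storing it."""
    return hashlib.sha256(text.encode()).hexdigest()[:16]

def call_model(prompt: str) -> str:
    # Stub standing in for a real model call; an assumption for this sketch.
    return f"summary of: {prompt[:40]}"

def audited_call(prompt: str, model_id: str, user: str) -> str:
    """Call the model and emit one provenance record per interaction."""
    output = call_model(prompt)
    record = {
        "ts": time.time(),
        "user": user,
        "model": model_id,
        "prompt_sha256": _digest(prompt),
        "output_sha256": _digest(output),
    }
    print(json.dumps(record))  # in practice: append to a tamper-evident store
    return output

audited_call("Summarize Q3 pipeline risks.", "internal-llm-v1", "analyst-042")
```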

Emerging Risks: The Frontier Most Organizations Haven’t Mapped 

The third category consistently surprises the rooms we present to — and it is evolving faster than the first two. 

Recording and transcription tools now capture everything said in a meeting, often without proper consent from all parties. AI-generated meeting summaries create verbatim records of confidential conversations, potentially destroying trade secret protections that depend on information not being widely documented. These tools are running in the background of most enterprise environments right now, creating exposure that no one is tracking. 

But the most consequential emerging risk is the rise of autonomous AI agents. Think of it this way: AI agents are virtual employees. They have identities, they make decisions, they access sensitive data. But unlike human employees, they can’t be fired, they can’t be held personally accountable, and they don’t think twice before acting. 

When an AI agent operates through protocols like the Model Context Protocol (MCP), it is no longer merely processing inputs — it is acting on behalf of the enterprise, with access to the same systems and data that human employees use. It can read and write files, query databases, send communications, execute code, and interact with enterprise software autonomously. Every connection represents a potential pathway through which an agent can access sensitive information, execute transactions, or interact with external parties — ungoverned. 
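
What governing those connections could look like is easiest to see in a sketch: a policy gate in front of every agent tool call, with a per-agent allowlist and an audit record for each attempted action. The agent names, the policy table, and the enforcement point below are illustrative assumptions; MCP defines how agents reach tools, not this governance layer.

```python
# Illustrative sketch: enforce a per-agent tool allowlist before any agent
# action executes. Names and policy shape are assumptions, not part of MCP.

AGENT_POLICY = {
    "invoice-agent": {"read_file", "query_database"},
    "support-agent": {"send_email"},
}

AUDIT_LOG: list[tuple[str, str, bool]] = []

def authorize(agent_id: str, tool: str) -> bool:
    """Allow a tool call only if policy grants it; log every attempt."""
    allowed = tool in AGENT_POLICY.get(agent_id, set())
    AUDIT_LOG.append((agent_id, tool, allowed))
    return allowed

def dispatch(agent_id: str, tool: str, **kwargs) -> None:
    """Single choke point between agent intent and tool execution."""
    if not authorize(agent_id, tool):
        raise PermissionError(f"{agent_id} is not permitted to call {tool}")
    # ... hand off to the actual tool implementation here ...

dispatch("invoice-agent", "query_database", table="payments")   # allowed
# dispatch("invoice-agent", "execute_code")  # would raise PermissionError
```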

Organizations have spent decades building controls around human behavior. They need to apply that same rigor to the AI systems now operating alongside them. And this list of emerging risks is not static. We are actively developing new risk categories as the landscape evolves. 

The CISO/CTO Divide 

In every enterprise we talk to, we see the same tension: the CTO is charged with driving AI innovation forward, and the CISO understands the value but is forced to either say no or accept risk they don’t want to carry — because the governance infrastructure to do it safely doesn’t exist yet. That tension slows everyone down. Security teams aren’t trying to block progress. They’re trying to avoid being in a position where the only options are stopping the business or approving a system that creates the next breach, the next lawsuit, or the next regulatory action. 

Enterprises don’t need to choose between AI innovation and security. What they need is a structured way to understand where their exposure actually sits — across all three risk categories — so the conversation can shift from “should we do this?” to “how do we do this safely?” 

That is what we built JetStream AI Advisory to provide. When executives see the full scope of their AI exposure, the conversation shifts from “let’s schedule a follow-up” to “how soon can we start?” We’re seeing CISOs pull their CIOs into meetings within hours of a single briefing — not because we told them to be worried, but because they saw for themselves what they’d been missing. 

What Comes Next 

This is the first in a series of articles exploring the AI risk landscape through the lens of what we’re seeing in the field every day. In the pieces ahead, we’ll go deeper into each risk area — the specific patterns, real-world consequences, and governance gaps that enterprises need to understand before their next board meeting, their next audit, or their next AI deployment. 

The organizations that move first won’t just be safer. They’ll be the ones who can actually accelerate AI adoption, because they’ll have the governance infrastructure to do it with confidence. 

JetStream AI Advisory is an expert-led advisory offering from JetStream Security, designed for CISOs, CIOs, General Counsel, and Chief Privacy Officers navigating enterprise AI risk. Led by Patrick Zeller, General Counsel, and Keith Weisman, Head of Forward Deployed Engineering, the advisory provides structured guidance to help organizations assess their AI exposure across legal, privacy, security, and operational risk — and build a path to control. JetStream AI Advisory is not a consulting engagement or legal service. It is a focused engagement built on frontline expertise. To learn more or request a conversation, contact aiadvisory@jetstream.security. 
