---
title: "What Boards of Directors Are Actually Asking About AI Risk"
id: "2244"
type: "insights"
slug: "what-boards-are-asking-about-ai-risk"
published_at: "2026-05-08T19:21:10+00:00"
modified_at: "2026-05-08T19:26:51+00:00"
url: "https://jetstream.security/insights/what-boards-are-asking-about-ai-risk/"
markdown_url: "https://jetstream.security/insights/what-boards-are-asking-about-ai-risk.md"
excerpt: "This post is authored by Patrick E. Zeller and Keith Weisman. Patrick is General Counsel at JetStream Security and a legal and compliance executive with more than 25 years advising Fortune 100 companies on AI governance, privacy, cybersecurity, and compliance...."
taxonomy_content_type:
  - "AI Advisory"
  - "Blog"
taxonomy_topic:
  - "AI Advisory"
---


May 08, 2026

### What Boards of Directors Are Actually Asking About AI Risk

Patrick Zeller & Keith Weisman


***This post is authored by Patrick E. Zeller and Keith Weisman. Patrick is General Counsel at JetStream Security and a legal and compliance executive with more than 25 years advising Fortune 100 companies on AI governance, privacy, cybersecurity, and compliance. Keith leads the Forward Deployed Engineering team at JetStream Security, where he works directly with enterprise customers on AI deployment, drawing on more than 30 years of hands-on security engineering and field experience.***

On May 5, JetStream Security presented to the [National Association of Corporate Directors (NACD)](https://www.nacdonline.org/), the largest organization representing public and private board members in America. The questions that came in from the audience told a clear story. Most organizations in the room had an AI policy. Yet only a third were performing AI discovery or maintaining a manifest of what was actually running. The gap between policy and practice is where real work begins.

What follows is a summary of the questions directors asked, and the answers that came back from the stage. Taken together, they map the issues boards are actively wrestling with right now, from privilege and legal holds to vendor controls, workflow drift, and what advancing AI capabilities mean for the speed of cybersecurity response. If your board has been circling these topics without a clear framework, this is the framework.

##### **“What’s the difference between public AI and enterprise AI — and does it matter?”**

The distinction matters more than most organizations realize, and price is not the dividing line. A paid consumer subscription is still a public AI tool. The defining characteristic is what happens to the data you put in. Public AI platforms collect user inputs, use them to train their models, and reserve the right to disclose them to third parties, including government regulators. That is not a hypothetical risk buried in fine print. Those are the standard terms of service for most consumer-grade tools.

An enterprise account operates differently. The data does not leave the organization, it is not used for training, and the vendor contractually commits to both. Even so, enterprise accounts require scrutiny. A large organization with tens of thousands of employees across multiple countries is still sharing data within its own environment. Who has access to what, and which internal systems are connected to the AI, are questions the security review needs to answer before employees start using the tool, not after.

The short version for boards: if your organization has not formally verified the data handling terms of every AI tool your employees are using, you do not know what you have.
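
To make the manifest idea concrete, here is a minimal sketch of what a per-tool inventory record might capture, in Python for illustration. The schema, field names, and example tool are hypothetical, not a standard; the point is that "formally verified" means someone recorded the answers and signed off.

```python
from dataclasses import dataclass

# Hypothetical schema for one entry in an AI tool manifest.
# Every field below is illustrative; adapt to your own review process.
@dataclass
class AIToolRecord:
    name: str                     # the tool, e.g. "ExampleAssistant"
    account_tier: str             # "public" or "enterprise"
    trains_on_inputs: bool        # does the vendor train models on your data?
    third_party_disclosure: bool  # can the vendor disclose inputs to third parties?
    connected_systems: list[str]  # internal data sources the tool can reach
    terms_verified_on: str        # when the data-handling terms were last reviewed
    approved_by: str              # who signed off

record = AIToolRecord(
    name="ExampleAssistant",
    account_tier="enterprise",
    trains_on_inputs=False,
    third_party_disclosure=False,
    connected_systems=["sharepoint", "crm"],
    terms_verified_on="2026-05-01",
    approved_by="security-review-board",
)

# A tool on a public tier, or one that trains on inputs, is exactly
# the "you do not know what you have" case described above.
if record.account_tier == "public" or record.trains_on_inputs:
    print(f"{record.name}: treat as public AI; keep sensitive data out")
```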

##### **“Can our employees lose attorney-client privilege by using AI?”**

Yes. And a federal court has already ruled on it.

In [United States v. Heppner (S.D.N.Y., February 2026)](https://harvardlawreview.org/blog/2026/03/united-states-v-heppner/), Judge Jed Rakoff addressed what he called “a question of first impression nationwide”: whether communications between a user and a publicly available AI platform are protected by attorney-client privilege or the work product doctrine. His answer was no, on three grounds.

First, the AI is not an attorney, and “privilege” requires communication with counsel. Second, the defendant had used a public version of the platform, whose privacy policy explicitly permitted the provider to collect user inputs, train on them, and disclose them to third parties, including government regulators. That destroyed any reasonable expectation of confidentiality. Third, the defendant had not been directed by his attorney to use the tool. He acted on his own initiative, which defeated the work product claim.

The practical implication is direct: An employee who inputs sensitive legal matters, case strategy, or privileged information into a public AI tool may have waived privilege over that material. Judge Rakoff did note that the outcome might differ if counsel had directed the use of an enterprise-grade tool with genuine confidentiality protections. But that open question is not a safe harbor. It is an argument that has not yet been tested.

Organizations that have not thought through how their AI tools interact with privileged information, legal holds, and ongoing litigation should move that conversation forward before their next board meeting.

##### **“Are AI prompts subject to legal holds?”**

They can be, and courts are actively ruling that they are.

When a company is in litigation and AI use is potentially relevant to the dispute, there is an affirmative duty to preserve that information. That duty is not limited to documents in a traditional sense. It extends to AI prompts and outputs if they could lead to the discovery of relevant information. Judges are enforcing this.

The practical consequence is that organizations need a mechanism to preserve AI data for the duration of litigation, or until relevance can be ruled out. In our field experience, many organizations are already retaining prompts for 30 to 60 days as a baseline for internal investigation purposes. For litigation, that window can extend significantly longer.

If your company is sued or even reasonably expects to be sued, any searches your employees have run through AI tools like ChatGPT, Microsoft Copilot, or Claude may need to be saved and handed over in court, just like emails or text messages. Courts have made clear that AI prompts (what you type in) and AI outputs (what the AI responds with) are treated as regular business records subject to discovery, and no special exemption exists just because the data came from an AI tool.

That means the moment litigation is on the horizon, you likely have a legal duty to stop those AI chat histories from autodeleting and to keep them preserved for the entire length of the case. That’s a significant commitment: complex cases can easily run five to six years, including an appeal. (Or longer. [Halliburton Co. v. Erica P. John Fund](https://ir.halliburton.com/news-releases/news-release-details/halliburton-reaches-settlement-securities-class-action-lawsuit) ran for more than 14 years, through two trips to the Supreme Court, before settling in 2016.)

During that entire window, your AI query logs could be sitting under a litigation hold, meaning employees can’t delete them, platforms can’t purge them on their normal schedules, and the opposing party may ultimately be entitled to see them. The practical takeaway is simple: treat anything typed into an AI tool the same way you would treat a company email, because a judge already does.

The legal precedent here is moving quickly. In a [February 2026 analysis](https://www.klgates.com/Litigation-Minute-Is-AI-Generated-Content-Discoverable-What-Companies-Need-to-Know-in-2026-2-12-2026), attorneys at K&L Gates note that courts are not carving out exemptions for AI-generated content; traditional discovery rules apply. They point to *In re OpenAI, Inc., Copyright Infringement Litigation*, in which a federal magistrate judge in the Southern District of New York compelled the production of millions of GenAI logs, including user prompts and model responses. The takeaway from the K&L Gates team is that companies should be disabling autodelete on AI tools, exporting chat histories, and folding GenAI data into their existing legal hold procedures now, not after litigation arrives.
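
To show what folding GenAI data into an existing legal hold procedure might look like mechanically, here is a minimal sketch of a retention check that exempts held custodians from autodeletion. The record format, field names, and `should_delete` helper are all hypothetical; a real implementation depends on each platform's export and retention controls.

```python
from datetime import datetime, timedelta, timezone

BASELINE_RETENTION = timedelta(days=60)  # the 30-to-60-day baseline noted above

def should_delete(prompt_record: dict, active_holds: set[str]) -> bool:
    """Return True only if a stored prompt/output record is safe to purge.

    prompt_record is a hypothetical log entry with 'custodian' and
    'created_at' fields; active_holds is the set of custodians under
    a litigation hold. Anything under a hold is preserved indefinitely.
    """
    if prompt_record["custodian"] in active_holds:
        return False  # a litigation hold overrides the normal schedule
    age = datetime.now(timezone.utc) - prompt_record["created_at"]
    return age > BASELINE_RETENTION

# Example: a record from a custodian under hold is never purged,
# even though it is well past the baseline window.
record = {
    "custodian": "jdoe",
    "created_at": datetime.now(timezone.utc) - timedelta(days=200),
}
print(should_delete(record, active_holds={"jdoe"}))  # False
```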

This is not a future requirement. It is the current one. The question for your legal and compliance teams is whether your AI governance infrastructure can actually produce that data if it is requested.

##### **“How do we know if our enterprise AI vendor has good controls?”**

Start with the contract. The vendor agreement should specify whether your data is used for model training (it should not be, in a proper enterprise setup), who can access it, and under what circumstances it might be disclosed. If the agreement is ambiguous on any of those points, that ambiguity is itself the answer.

Beyond the contract, the internal architecture matters as much as the vendor relationship. Who is connected to which data sources through the AI? Many organizations are discovering that the AI deployment that started in one department has spread connections to databases that were never part of the original scope. Finance, legal, and HR data warrant particular attention, both because of their sensitivity and because of the regulatory exposure that follows if they are mishandled.

A number of organizations have responded by deploying purpose-specific AI environments for their legal and finance teams, firewalled from the broader organization. That approach trades operational efficiency for meaningful data separation. Whether that tradeoff makes sense depends on the risk profile of the data those teams handle.

##### **“What is drift, and why should the board care about it?”**

With the rapid pace of AI evolution, it is difficult to stay on top of terminology, so two concepts are worth separating. Hallucinations are outputs: the AI gives you an incorrect or fabricated answer. Drift is structural: an agentic workflow begins doing something it was not designed or approved to do.

An agentic workflow is an automated sequence of actions that an AI system carries out on your behalf. Onboarding a new employee through your HR system is a clean example: the workflow creates accounts, configures access, initializes payroll, and handles the series of steps that a human used to do manually. When that agentic workflow is deployed, it has an approved scope. Drift is what happens when the workflow expands beyond that scope without explicit approval.

Say your organization has approved a specific HR AI workflow to connect to your Workday platform for HR functions. You go back and check six months later, and you find the agentic workflow has changed: it is now using unapproved data sources and granting access to unapproved individuals. The data exposure from those connections is real, but no one identified the change because no one was watching. That is drift. And the difficulty of monitoring is not a reason to accept it. It is a reason to build the visibility infrastructure that makes monitoring possible.
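
In practice, a first-pass drift check can be as simple as diffing the approved scope against what a discovery scan actually observes. The sketch below is illustrative: the scope names and scan data are assumptions, and real monitoring would pull the observed side from the environment rather than a literal.

```python
# Minimal drift check: compare a workflow's approved scope to what is
# actually observed in the environment. Names and data are illustrative.

approved = {
    "data_sources": {"workday"},
    "users": {"hr-ops"},
}

observed = {  # what a discovery scan finds six months later
    "data_sources": {"workday", "finance-warehouse"},
    "users": {"hr-ops", "contractor-7"},
}

for scope in ("data_sources", "users"):
    drifted = observed[scope] - approved[scope]
    if drifted:
        # Anything here was added without going back through approval:
        # that is drift, and it should page someone.
        print(f"DRIFT in {scope}: {sorted(drifted)}")
```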

For boards, the right question is not “can we stop AI from changing?” It is “do we have a platform and a process that tells us when approved AI systems have changed, and can we verify that those changes are within policy?”

##### **“What should boards actually be asking about AI risk?”**

The governance framework for AI risk looks a lot like the one boards already use for cybersecurity: understand the risk surface, ask the right questions of the right people, and hold management accountable for the answers.

The specific questions that matter right now:

- What AI tools are approved for use, and who approved them?

- Do employees know what the policy is, and how is compliance being measured?

- What visibility does the organization have into how AI systems are behaving, not just what they were designed to do?

- When the policy needs to change, what is the process for changing it?

The reality at most organizations is that AI tools got turned on fast, often before formal review processes could catch up. The polling from this session made that gap visible in real time: 70% of attendees had an AI policy in place. But 67% were not performing AI discovery, and 74% were not maintaining a manifest of AI usage across their organization. **Policy without visibility is not governance. It is paperwork.**

When attendees were asked what they planned to request from their senior leaders coming out of the session, they were given five options: a full AI inventory, legal and compliance risk review, workflow visibility, workflow drift monitoring, and financial controls. The standout result: 23% selected all five. Legal and compliance risk review tied at 23% as a standalone request, with workflow visibility and financial controls close behind at 15% each. That is not an audience that does not know where to start. It is one that recognizes the answers are not separable. AI governance is not a single ask. It is the whole stack.

##### **“How should we think about advanced AI capabilities and the security implications?”**

Concerns were raised during the session about the capabilities of advanced AI models currently in development, and what they mean for enterprise security posture.

The security question is straightforward, even if the answer requires work: AI is increasingly capable of identifying software vulnerabilities that have existed undetected for years, and of generating functional exploit code against them. This is not speculative. It is a capability that exists now and will expand.

This changes the math on remediation speed. Organizations that could previously manage a measured patch cycle are now facing an environment where the gap between vulnerability discovery and active exploitation is compressing. The question for security leaders, and the question boards should be asking their security teams, is not whether this will affect them. It is whether their remediation processes are built for the speed the current threat environment demands.
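
As a back-of-the-envelope illustration of how that math changes (the numbers here are assumptions, not figures from the session): if exploitation now begins days after a vulnerability surfaces while the patch cycle is still measured in weeks, the exposure window is most of the cycle.

```python
# Illustrative arithmetic only; both numbers are assumptions.
patch_sla_days = 30        # a "measured patch cycle"
time_to_exploit_days = 5   # AI-assisted exploitation compresses this window

exposure_days = max(0, patch_sla_days - time_to_exploit_days)
print(f"Exposure per critical vulnerability: {exposure_days} days")  # 25 days
```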

There is also an opportunity in the same capability. Organizations can use AI to find their own vulnerabilities before an adversary does. The boards asking these questions already understand the assignment. AI is not the risk to manage. The absence of governance is. Governance is what lets a company move at the speed AI demands, instead of being moved by it.

