Do’s and Don’ts of Using AI: A Director’s Guide

Skadden Publication / The Informed Board

Ken D. Kumayama, Sonia K. Nijjar, Jenness E. Parker

Key Points

  • Directors who use AI on their own for corporate purposes need to be aware of some pitfalls particular to their roles.
  • Sharing confidential corporate information with chatbots should be avoided unless it has been confirmed that the AI model will not train on the material or make it available to the AI company’s employees.
  • AI chats may be discoverable by regulators or litigation adversaries, potentially disclosing information that could be used against the company’s interests.
  • Using AI recording and transcription tools also could reveal confidential corporate information, or leave that information vulnerable to disclosure through discovery or similar requests.

At the same time that AI chatbots and tools have begun reshaping how businesses operate — including how they strategize, optimize workflows, perform R&D and distill large amounts of information — individuals, including directors, are routinely turning to the technology.

While boards are weighing the payoffs and risks of deploying the technology at their companies, individual directors also need to give thought to their own use of these tools in their corporate roles. There are potential pitfalls, some of which may not be obvious. Here’s a quick overview and tips on how to avoid missteps.

Avoid uploading or inputting confidential information or personal data.

It might be tempting, for instance, to upload board materials in advance of a meeting and ask for an AI summary. But a company’s personal data, trade secrets or other confidential information should only be analyzed with AI tools that have been validated by the company’s internal IT team. Feeding confidential materials into a publicly available chatbot (whether free or paid) could make the information accessible to R&D personnel or others at the AI company. And if the model trains on the confidential material, it could even be incorporated into the output for other users — potentially including the company’s competitors. (Chatbots validated by your company likely include safeguards, such as confirmation that the tool will not train on the company’s inputs.)
In some cases, using AI on confidential corporate information could also violate a company’s contractual obligations, its internal policies or privacy laws.

To be safe, stick to public, nonconfidential inputs when using public AI tools. These can still be helpful for analyzing publicly available industry trends, market data and economic indicators, or for generating summaries of public financial statements or press releases.

Keep in mind that AI chats (including information you share with an AI model) may be discoverable.

Just like emails and other records, AI chats may be discoverable and could end up in the hands of regulators or an adversary. Even if your chat history with a chatbot is no longer accessible to you, the AI vendor may still be able to produce it if required to do so by a court.

For example, if a company signs a major deal to acquire a competitor, antitrust regulators reviewing the transaction could take the position that AI-generated content in the files of officers or directors is discoverable if it relates to the competition or markets at issue.

AI tools should not be used to record board meetings or generate meeting minutes.

AI can be very helpful in transcribing discussions, but transcription tools may retain data, including audio recordings and generated transcripts. Given the sensitive nature of board meetings and the care that goes into drafting board minutes, third-party access to raw dialogue could pose significant legal and business risks. And, as noted above, such records may become discoverable in a shareholder derivative suit or other litigation.

On a related note, avoid using third-party services to record or transcribe any communications with counsel, as this could result in the attorney-client privilege being lost if those communications are accessible to people outside the company.

On the other hand, it may be safe and helpful to use AI transcription tools to record employee training sessions, educational webinars, customer service calls and other events that don’t contain privileged information and can be useful for knowledge retention or other reasons.

AI outputs need to be verified.

By now, AI’s penchant for “hallucinating” untrue “facts” is well known, but the point bears repeating. AI can make mistakes, get “confused,” or provide outdated, inaccurate or biased information. Do not assume that because something looks polished or sounds right, it is correct. Review the sources an AI model cites as the basis for its statements to confirm both that those sources are trustworthy and that the model correctly interpreted and synthesized them.

Also, AI models are only as good as their training data and the context they are given. While superb at analyzing patterns, AI may not be able to account for unprecedented market conditions, emerging regulatory requirements or individual choices. Be aware, too, of the cutoff date for a model’s training data. Asking AI to evaluate an acquisition target based on historical financial metrics when the model only has access to data through 2023 is likely to produce an inaccurate response.

AI augments human judgment but does not replace it.

Treat AI as a powerful tool that assists human decision-making but is not a substitute for human judgment. Chatbots are great for ideation, double-checking your thinking, and getting a second or third opinion. But do not delegate HR, strategic or other important decisions to AI without a human “in the loop.” Doing so could violate a director’s duties of care and loyalty, and in some cases could be illegal.

Given all these possible pitfalls, boards may want to work with management to develop clear policies on using AI for board work, potentially including approved tools, acceptable uses and required disclosures.

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.