Generative AI: Practical Considerations for Companies and Boards

June 4, 2025

The AI revolution is creating both extraordinary opportunities and potentially far-ranging and novel risks for U.S. companies.

This article provides an overview of the current AI legal landscape, summarizes key risks for AI adoption and implementation, discusses the roles of boards and senior leaders in overseeing AI adoption and deployment, and provides key takeaways for navigating and mitigating these risks.

I. Introduction

In 2025, the question is no longer whether an organization should use artificial intelligence (“AI”),[1] but where and how to use it to maximize its utility in the service of its business, employees, and customers. This past year saw a shift from general contemplation of AI use to deployment and value generation.[2] AI continues to revolutionize business in big and small ways and is rapidly evolving. Its potential use cases are far-ranging, from drug development, environmental impact analysis, and precision farming, to supply chain and human resource management and software development. AI in marketing is an area of fast-paced innovation, including content development and audience targeting. AI-powered chatbots are exploding in popularity. At any depth of deployment, companies and boards need to be aware of key risks AI poses and areas of uncertainty in the laws governing its use. This article provides an overview of the current AI legal landscape, and then discusses the roles of boards and senior leaders in overseeing AI adoption.

II. Risks for AI Adoption and Implementation

A. Copyright

When considering legal risks presented by AI, many people think first of copyright concerns. Copyright may vest in any original work of authorship fixed in a tangible medium of expression. Copyright confers on the owner certain exclusive rights, including the right to prevent others from reproducing, displaying, or performing the work, or making derivative works, without authorization. Two primary copyright concerns arise in the context of generative AI: infringement and authorship. We discuss each below, along with measures companies should consider taking to mitigate the associated risks.

1. Infringement Risks

When AI tools generate content, it is not created out of thin air. AI developers begin by “pretraining” models on vast amounts of input data (such as text, images, and audio and video files) so that the models extract statistical information and develop algorithms based on the probabilistic distribution of that data. Developers then ask the model to answer text- or voice-based queries or prompts and provide feedback on the results it produces, fine-tuning it for specific capabilities or use cases or steering it away from unwanted outputs. When fully trained in this manner, AI tools are capable of generating a striking array of never-before-seen output that can be put to practically limitless new uses. However, copyright infringement remains a legal risk both for AI developers training and building models, and for the companies that deploy them in service of their business.
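For readers who want a concrete, if radically simplified, picture of the mechanism at issue, the short Python sketch below shows how statistical information can be derived from copies of ingested text. It is a toy illustration with hypothetical names and data, not any developer’s actual pipeline; production models learn billions of neural-network parameters at vastly greater scale, but the pretraining step the plaintiffs challenge rests on the same basic fact that copies of works are ingested and their statistical patterns extracted.

    # Illustrative sketch only: hypothetical names and toy data.
    # It shows, in miniature, how statistical information (here, next-word
    # probabilities) is derived from copies of ingested text.
    from collections import Counter, defaultdict

    def build_bigram_model(documents):
        """Derive next-word probabilities from a corpus of ingested documents."""
        counts = defaultdict(Counter)
        for doc in documents:  # each item is a copy of an ingested work
            tokens = doc.lower().split()
            for current, nxt in zip(tokens, tokens[1:]):
                counts[current][nxt] += 1
        # Convert raw counts into a probability distribution per word.
        return {
            word: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
            for word, nexts in counts.items()
        }

    corpus = [
        "the quick brown fox jumps over the lazy dog",
        "the quick red fox runs past the sleeping dog",
    ]
    model = build_bigram_model(corpus)
    print(model["the"])  # {'quick': 0.5, 'lazy': 0.25, 'sleeping': 0.25}

In this toy example the resulting “model” stores only word-pair frequencies rather than the sentences themselves, but building it still required copying each work into the training corpus, which is the act the training-data lawsuits place at issue.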

Numerous individuals and companies — including book authors, artists, news and media companies, and music publishers — have filed lawsuits asserting that AI developers improperly made copies of their copyrighted work to train generative AI models without permission and that this constitutes copyright infringement. There are now upwards of 30 such cases in the United States, largely centered in the Northern District of California and the Southern District of New York.[3] Currently, these claims are directed almost exclusively at AI developers, rather than companies that deploy and implement models trained by others. These claims are in active litigation and will likely turn on the question of whether making copies of copyrighted works to train an AI model constitutes “fair use.” Fair use is a statutory exception to infringement that allows “transformative” uses of copyrighted works to create new content that serves a different purpose or function, rather than merely usurping the market for the original by reproducing it. The fair-use analysis is factually nuanced and depends on a number of factors that must be considered holistically. No court has yet decided whether training a generative AI model on copyrighted content constitutes fair use, though there are a number of analogous scenarios in which fair use has been found. We expect that the first summary judgment motion to address fair use in training will be heard in May 2025, with other cases to follow and the first decisions expected in late spring or early summer.

Another theory of copyright infringement advanced in some of the pending cases[4] is that the outputs from the AI tools allegedly infringe plaintiffs’ works because they replicate protected content from copyrighted materials on which they were trained. Under U.S. copyright law, outputs must be “substantially similar” in protected expression to the training materials in order to be infringing, but this can sometimes be difficult to discern from the face of an output. Further, the fair-use defense may apply even to substantially similar outputs, depending on how they are used in context. News reporting, scholarship, criticism, and commentary are classic examples of fair use, even where the content of another is used without permission.

Companies deploying third-party AI tools should be aware of, and take steps to protect themselves from, copyright-related risks. Appropriate mitigations include:

  1. Understand and vet the capabilities and limitations of any AI tools contemplated for use.
  2. Maintain a policy covering who may use generative AI tools within the company, and for what purpose. Generation of certain content for internal purposes may be lower risk than generation of content for consumer-facing promotional materials or incorporation in public-facing products or software.
  3. Track employee use to ensure compliance with company policy. Employees may inadvertently agree to terms of service that bind the company.
  4. Be aware of any indemnification being offered by the developer against third-party claims of copyright infringement, and take steps to conform use to any requirements for indemnification to apply. For example, some indemnification provisions require that any protective features offered to avoid reproduction of copyrighted content be activated.
  5. Subject AI-generated content to the same level of scrutiny and review as human-generated content before it is published or used by the company. Indemnification will usually not be available where a tool was used for the purpose of generating infringing content, where unauthorized third-party copyrighted content was used to prompt the tool, or where the company has reason to know that the output implicates the intellectual property rights of others.

 

2. Copyrightability and Ownership Risks

There are also significant open questions as to who, if anyone, owns content generated using generative AI tools. The U.S. Copyright Act provides that ownership of the copyright in an original work of authorship vests in the “author,” but it does not define this term.[5] The U.S. Copyright Office (“USCO”) has repeatedly pronounced that, “To qualify as a work of ‘authorship’ a work must be created by a human being.”[6] At present, the USCO has taken the position that content generated by prompting generative AI tools is, in most instances, not entitled to copyright protection or registration because of the lack of a human author.[7] Practically speaking, this means that neither the user of the AI platform who inputs the prompt, nor the developer of the AI tool, can copyright any content output by a generative AI model. That content may be considered, at least for now, in the “public domain,” and can be used or copied by anyone. However, federal courts will have the final say on this issue in the U.S., and it is actively being litigated by human artists who claim they should be entitled to own the copyright in AI-augmented outputs they spend considerable time and creative energy to produce.[8]

The lack of clarity around ownership of AI outputs creates a meaningful risk. Although the terms of service of most AI platforms disclaim ownership of outputs and provide that the user will own any copyright interest therein, copyright registration (which is generally a prerequisite to filing an infringement lawsuit) is currently unavailable. It is uncertain whether and under what circumstances courts will ultimately find that a human user of an AI tool may claim copyright ownership over outputs generated. As a result, companies must assume for present purposes that any outputs they or their employees create will not be afforded protection under copyright from use or reproduction by others and take appropriate steps to protect against this risk.

Another issue of copyright ownership surrounds the use of AI platforms trained on open-source code. Use of such code is governed by open-source licenses; “copyleft” licenses in particular allow largely unfettered use so long as any software incorporating the code is also made available for use by others on similar terms. When a company uses generative AI tools trained on open-source code, there is a risk that the outputted code may contain snippets of open-source content that inadvertently subject the user’s own codebase to open-source requirements under the terms of the applicable license.

Companies should be aware of and take steps to mitigate both of these risks. Mitigation strategies may include:

  1. Consider what outputs will be generated and how they will be used. Does the company need to protect the output under copyright (e.g., will the output code be included in consumer-facing software products; will the company be harmed by the use of the output by others)? If so, consider whether the company should be using generative AI tools under the circumstances.
  2. Can the output be protected through some other means, e.g., by keeping it confidential and internal to the company?
  3. Implement, distribute, and oversee compliance with a policy regarding which uses of generative AI are acceptable and not acceptable in light of the associated IP risks and under what circumstances.
  4. If the company intends to seek copyright protection for a particular work, keep well-documented records of which components are human-created and which are AI-generated. The USCO may issue a registration covering the human-created portions, or their overall composition, selection, or arrangement, even if specific AI-generated components (e.g., images or text snippets) cannot be protected. The USCO also requires anyone seeking a copyright registration to identify and disclaim AI-generated content under penalty of forfeiture of protection for the work as a whole.[9]
  5. Consider registering any underlying human-created content that may be fed into an AI tool as part of a prompt. Even if copyright protection is not available for the AI-generated portions, a registration for the underlying work may be sufficient to prevent copying if the AI-augmented output is substantially similar.
  6. Consider registering the copyright in complex prompts that rise to the level of a copyrightable, original work of authorship. Again, even if the output cannot be copyrighted at this time, such a registration may prevent others from copying and using the prompt to generate a similar work.

 

B. Antitrust

In the antitrust space, private plaintiffs have brought cases alleging that multiple companies that use the same commercial pricing AI tools have violated Section 1 of the Sherman Antitrust Act,[10] which prohibits agreements to restrain trade. Plaintiffs have alleged that the AI platform operates as a central coordinating party that enables all of the companies using the algorithm to coordinate on pricing. Most cases rely on the theory that companies share competitively sensitive information with the AI, which then uses that information to suggest prices to all participants. In public statements, the Federal Trade Commission has likened the use of pricing AI to a group of companies all delegating their pricing authority to the same individual, who then sets prices for the entire market, except that the individual here is an AI tool.

If a company decides to use pricing AI, it should consider contractual agreements with the developer to ensure the AI relies only on the company’s own data in making recommendations. The company’s data should be siloed from that of other users and not shared with anyone else. As an additional risk-mitigation measure, a company could also request that its data not be used to train the developer’s AI pricing algorithm. That way, a company’s confidential data remains entirely separated from that of its competitors.

Beyond coordination via pricing, AI in 2024 and 2025 has become a broader area of concern for global antitrust agencies, with agencies preparing reports, opening cases, and reviewing mergers. While antitrust agencies acknowledge the potential for AI to boost innovation and economic growth, they see risks arising from the technological inflection point we are now at, and are determined to “ensure that the public reaps the full benefits of these moments.”[11] The overarching antitrust concerns with AI are best summarized in a joint statement of the major agencies across the US, EU, and UK published in July 2024:[12]

  • Concentrated control of key inputs: the agencies identified specialized chips, compute capacity, data at scale, and talent as “critical ingredients” to develop foundation models. They noted that “concentrated control of key inputs . . . could potentially put a small number of companies in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools.”
  • Entrenching or extending market power in AI-related markets: the agencies raised a concern that incumbent firms might leverage strong positions in existing business- or consumer-facing services to give competitive advantages to their own AI models.
  • Partnerships: the agencies also noted that partnerships and acquisitions between AI companies could create, reinforce, or extend positions of market power either across the foundation model value chain or in downstream markets. They recognized that “in some cases, these arrangements may not harm competition but in other cases these partnerships and investments could be used by major firms to undermine or co-opt competitive threats and steer market outcomes in their favor at the expense of the public.”

Companies should be aware from an early stage of the antitrust concerns around AI, and consider building into their governance framework the three core principles promulgated by the US, EU, and UK agencies: fair dealing, interoperability, and choice.[13]

C. AI Decision-Making: Bias and Error

AI use in decision-making creates risks analogous to those in human decision-making. For example, AI use in hiring processes has led plaintiffs to bring employment discrimination cases alleging that AI hiring tools discriminate on the basis of race, age, and disability. The idea of “AI bias” is perhaps counterintuitive, but research has shown that AI can develop a bias from its training data, prompt inputs, or the coding of the algorithm. AI bias can persist despite developers’ diligent efforts to make an algorithm objective, both because of subconscious cognitive biases and because of unevenness in the data on which systems are trained.[14]

If algorithms become more deeply integrated into decision-making processes such as hiring, school admissions, lending and other financial transactions, or even state-sanctioned penalties, we are likely to see more cases alleging that the AI is acting on biases that it adopted through programming and machine learning. This is not to say that AI should be avoided in such functions. Rather, it should be adopted with disciplined risk mitigation in focus at the outset.

Beyond reputational risk, bias and error can result in significant legal and enforcement risk, particularly for highly regulated industries. For example, while noting the benefits and efficiencies of well-managed AI tools, both the Federal Reserve[15] and the Consumer Financial Protection Bureau[16] have recently warned banks and lenders about potential bias in AI that could lead to violations of fair lending, fair housing, and equal opportunity laws. Similarly, the SEC has proposed substantive rules related to the use of (and potential conflicts of interest associated with using) predictive data analytics in connection with products and services offered by investment advisers and broker-dealers.[17]

Boards and senior leadership should take stock of where AI is used in their company’s business and evaluate the risk of AI bias in each area. As with managing potential bias when not using AI, the best approach to managing bias in AI is to have a diverse, cross-functional group of people involved in decision-making processes, along with proper oversight and reporting mechanisms to catch any instances of bias. For example, if AI is being used in hiring, companies should ensure that AI is not the only decision-maker in the process. A human should remain involved in, and provide proper oversight over, hiring decisions. Proper oversight and a diverse set of decision-makers help to ensure that inadvertent biases are identified and avoided.

D. Misrepresentation and Fraud

The SEC has settled its first AI fraud cases against two investment advisers that falsely claimed to be using AI tools and predictive algorithms to provide investment advice.[18] Unlike the other cases involving AI discussed above, the defendants in these cases, Delphia (USA) Inc. and Global Predictions Inc., faced liability for not using AI (to the extent, and in the manner, they claimed) rather than for using it improperly. Then-SEC Chair Gary Gensler said that the SEC is committed to protecting investors from companies falsely claiming to use AI as a means of enticing business, a practice Gensler has called “AI Washing.” Id.

Private investors have also initiated lawsuits regarding companies’ claims about their AI’s sophistication and competitive advantages. In Upstart Holdings, the court denied a motion to dismiss against Upstart, a “cloud-based AI lending platform.”[19] The investors brought claims under the federal securities laws alleging that Upstart misrepresented the “significant advantage” of its AI model over traditional FICO-based lending models, and its ability to respond dynamically to macroeconomic changes.[20] Similarly, in Jaeger v. Zillow Group, Inc., the court denied, in part, a motion to dismiss against the real estate website Zillow.[21] In Jaeger, plaintiff investors brought class action claims under the federal securities laws alleging that defendants made misleading statements about Zillow’s AI tool for forecasting home prices by concealing that the company was also using non-automated pricing overlays and by creating the misleading impression that Zillow was working to improve its automation technology.[22]

Companies should vigilantly monitor for any form of fraud in their business, and potential misrepresentations involving AI are no different. With the advent of a new and exciting technology, companies will be eager to integrate AI into their products and service offerings and to advertise its use to consumers. Companies should not only take measures to ensure that they themselves are not engaging in AI washing, but should also be aware of other companies that may be doing so. A company could be accused of AI washing if it advertises that it uses a third party’s AI, but the third party is not actually using AI or has misrepresented its use. As AI becomes increasingly popular and in demand, adopting and deploying a robust set of controls to guard against fraud in the representation of AI use is essential to minimize the risk of regulatory enforcement and private securities claims alike.

III. The Board’s Role in Managing AI

A. The Board’s Oversight Duty

Boards have a duty to adequately oversee corporate activity, including key risks.[23] Ideally, as AI use cases are evaluated by management teams under board oversight, both the rewards and the risks associated with AI-use strategies should be analyzed. Boards should be thorough in documenting their consideration and oversight of these opportunities and the corresponding risks; while latitude is given to companies exercising business judgment in good faith, it can be more challenging to defend decision-making when the paper record does not reflect all of the care taken by leadership. A board’s satisfaction of its oversight obligations under Delaware law and the law of other jurisdictions could come into question when AI adoption is not accompanied by robust risk mitigation, for example, if employees leverage AI without formal use policies or monitoring mechanisms in place.

Only some boards and management teams, particularly outside of the tech sector, currently have meaningful in-house AI expertise or infrastructure.[24] Given the power of AI, employee use can easily become misuse without well-developed policies and procedures, as well as compliance monitoring. Additional staffing may be needed, though specific requirements for use policies and senior-level AI expertise, whether in the boardroom or in the C-suite, likely will vary with how important AI is to the central mission of the business and how deeply embedded it is likely to become.

B. Effective AI Implementation

The efficiencies that AI adoption promises also create a risk of over-reliance that could be irreversible if the integration is not carefully managed with board oversight.

  • Knowledge Gap — When a company implements AI to streamline operations, AI-related workforce reductions or innovations could create a situation where few employees know how a particular process works. Employees who are let go may possess institutional knowledge about workstreams that AI systems may not fully replicate or understand.
  • Misinformation Reliance — AI may generate or infer “facts” that are false, producing outputs known as “hallucinations.” This has been exemplified by several recent high-profile instances of AI from major developers making incorrect claims during public demonstrations and of lawyers citing non-existent precedent provided to them by AI. AI-generated data containing hallucination errors could pollute otherwise accurate data without detection. This risk may compound over time as AI-generated data is used to train other AIs.
  • Decision-Making — Reliance on generative AI without understanding its limitations could lead to faulty decisions, based on limited or misunderstood decision-making criteria, that would not have been made under normal circumstances.
  • Third-Party Reliance — Corporations that are not developing AI capabilities wholly in-house are subject to the risks posed by relying on a third-party provider. Leaders should be cautious in the event the relationship sours.

To counteract AI reliance risk, corporations should retain highly skilled workers who can mitigate knowledge gaps and monitor for AI limitations. Such employees should be central to AI integration. AI should be a partner to subject-matter experts and data analysts, not a replacement.

IV. Key Takeaways

  • AI is rich in promise, but should be adopted with risk mitigation in mind from the outset to maximize value and minimize unforeseen liability.
  • Senior leaders should be involved in AI selection and adoption, and boards should be involved in its oversight, as AI poses key risks in addition to great benefits.
  • AI should not replace subject-matter experts and data analysts, but instead should be integrated with their roles to protect against over-reliance risks.
  • Whether or not a company is currently adopting AI-based capabilities, it still faces strategic business risks associated with the AI revolution, and all companies should prioritize mitigating these potentially far-ranging and novel risks.

This article was originally published in The Review of Securities & Commodities Regulation.


[1] This article focuses on generative AI, including tools built using large language models (“LLMs”) and diffusion models. Generative AI models are trained on vast quantities of data from which they derive complex algorithms that allow them to understand language, process user text or voice prompts (or “inputs”), and generate “outputs” in the form of images, text, video, audio, etc.

[2] According to McKinsey & Company’s 2024 annual survey, in just over 10 months from the 2023 survey, the share of respondents who reported regular use of generative AI nearly doubled, to 65%. McKinsey & Company, “The State of AI in Early 2024: Gen AI adoption spikes and starts to generate value” (May 30, 2024), available at https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year.

[3] Some of the leading cases include Andersen et al. v. Stability AI et al., 3:23-cv-00201 (N.D. Cal. Jan. 13, 2023) (putative class action in which visual artists seek to challenge training of competing image models); Zhang et al. v. Google, LLC, 5:24-cv-02531 (N.D. Cal. Apr. 26, 2024) (proposed class of book authors challenging training of Google’s Bard/Gemini models); Tremblay v. OpenAI et al., 3:23-cv-03223 (N.D. Cal. June 28, 2023) (proposed class of book authors challenging training of ChatGPT); Kadrey et al. v. Meta Platforms, Inc., 3:23-cv-03417 (N.D. Cal. July 7, 2023) (proposed class of book authors challenging training of Meta’s Llama LLMs); Concord Music Group v. Anthropic PBC, 5:24-cv-03811 (M.D. Tenn. Oct. 18, 2023) (music publishers challenging training of Anthropic’s Claude model on copyrighted song lyrics).

[4] Most cases challenging outputs have been filed by individual content owners as direct claims, rather than proposed class actions. See, e.g., Thomson Reuters Enterprise Center GmbH v. ROSS Intelligence Inc., 1:20-cv-00613 (D. Del. May 6, 2020) (action against legal research startup ROSS Intelligence over its natural language search engine AI model, alleging that ROSS used protected Westlaw headnotes to train its model); Getty Images (US), Inc. v. Stability AI, Inc., 1:23-cv-00135 (D. Del. Feb. 3, 2023) (stock image site, Getty Images, sues Stability AI, asserting claims of copyright and trademark infringement based on allegations that Stability AI “scraped” Getty’s website for images and data used in the training of its image-generating model and that the model generates infringing outputs); New York Times v. Microsoft Corporation et al., 1:23-cv-11195 (S.D.N.Y. Dec. 27, 2023) (individual action against OpenAI entities and Microsoft alleging that defendants used protected work for training ChatGPT and that ChatGPT generates infringing outputs).

[5] 17 U.S.C. § 201.

[6] U.S. Copyright Office, Compendium of U.S. Copyright Office Practices (3d ed. 2017); see also Naruto v. Slater, 888 F.3d 418, 420 (9th Cir. 2018) (“[W]e conclude that this monkey — and all animals, since they are not human — lacks statutory standing under the Copyright Act.”).

[7] Thaler v. Perlmutter, No. 23-5233, 2025 WL 839178, at *4 (D.C. Cir. Mar. 18, 2025) (affirming lower court’s decision that a work generated by an AI model lacked human authorship); see also U.S. Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (Jan. 27, 2025).

[8] See generally Allen v. Perlmutter, No. 1:24-cv-02665 (D. Colo. 2024); see also Katelyn Chedraoui, This Company Got a Copyright for an Image Made Entirely With AI. Here’s How, CNET (Feb. 10, 2025).

[9] Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16,190 (Mar. 16, 2023).

[10] See, e.g., Gibson et al. v. Cendyn Grp., Case 2:23-cv-00140 (D. Nev. May 8, 2024) (action for Sherman Act Section 1 violations against software company that provides an algorithmic price-setting software and hotel that used that software).

[11] Joint Statement on Competition in Generative AI Foundation Models and AI Products by the European Commission, UK Competition & Markets Authority, U.S. Department of Justice, and U.S. Federal Trade Commission (July 23, 2024) (the “Joint Statement”).

[12] Id.

[13] Id.

[14] IBM, “Shedding light on AI bias with real world examples” (October 16, 2023).

[15] See, e.g., Michael S. Barr, Vice Chair for Supervision, Board of Governors of the Federal Reserve System, Furthering the Vision of the Fair Housing Act, Speech at “Fair Housing at 55 — Advancing a Blueprint for Equity”, National Fair Housing Alliance 2023 National Conference, Washington, D.C. (July 18, 2023), available at https://www.federalreserve.gov/newsevents/speech/barr20230718a.htm.

[16] See, e.g., CFPB “CFPB Issues Guidance on Credit Denials by Lenders Using Artificial Intelligence”(September 19, 2023), available at https://www.consumerfinance.gov/about-us/newsroom/cfpb-issues-guidance-on-credit-denials-by-lenders-using-artificial-intelligence.

[17] SEC Press Release, “SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers” (July 26, 2023).

[18] SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence, The Securities and Exchange Commission (March 18, 2024), https://www.sec.gov/newsroom/press-releases/2024-36 (last visited Sept. 14, 2024).

[19] In re Upstart Holdings, Inc. Sec. Litig., No. 2:22-CV-02935, 2023 WL 6379810, at *2 (S.D. Ohio Sept. 29, 2023).

[20] Id. at *13.

[21] Jaeger v. Zillow Grp., Inc., 644 F. Supp. 3d (W.D. Wash. 2022).

[22] Id. at 871.

[23] Caremark claims have seen a recent surge, including the expansion of the duty of oversight to include officers. For more information on the expansion of Caremark duties, see Cleary Gottlieb Steen & Hamilton LLP, “Delaware Courts Beef Up Caremark Claims Involving Corporate Misconduct While Leaving Hot-Button Political and ESG Issues to the Boardroom” (January 17, 2024).

[24] In a KPMG study, 53% of respondents cited a lack of appropriately skilled resources as the leading factor limiting their ability to review AI-related risks. KPMG, “Responsible AI and the challenge of AI risk” (2023). In McKinsey’s 2023 annual survey, just 21% of adopters said their organizations had established policies governing employees’ use of AI. See McKinsey & Company, “The State of AI in 2023: Generative AI’s breakout year” (August 1, 2023). At present, 68% of executives surveyed by Deloitte reported a moderate-to-extreme AI skills gap. See Deloitte Center for Technology, Media & Telecommunications, “Talent and workforce effects in the age of AI: Insights from Deloitte’s State of AI in the Enterprise, 2nd Edition survey.”