Juliana Neelbauer, Partner at Fox Rothschild LLP, with practices in Corporate, Intellectual Property Licensing, and Data Privacy and Cybersecurity law
*Disclaimer: The following is not legal or tax advice or a prediction of future matters or law. It is a survey of some business issues and tax and legal considerations regarding AI usage in transactions. For specific tax or legal advice related to your clients’ business and accounting concerns and needs, please engage and discuss AI issues with your accountant and tax attorney.
AI Cannot Replace Human Judgment or Trust
One of the most pressing questions regarding artificial intelligence (AI) is: Are we all losing our jobs? The answer is "somewhat," in that our roles and tasks will change, as they did with the rise of fax machines, word processors, e-signatures, and email. A more pressing question for accounting and business leaders is, "Are client transactions at risk due to generative AI usage?" The answer is "yes," because AI platforms and tools are not replacements for human judgment or trust, nor are they insurance policies against transactional and asset risks. In many ways, jobs and business operations will evolve and adapt to these tools and integrate them into delivery pipelines, which, in the immediate term, will require human supervision as an insurance policy on outputs and compliance.
Every organization must ask the question, "Should we be using this technology?" For example, law firms may be allowed to use ChatGPT and other AI tools to create initial drafts of marketing communications or master templates, but they may not be allowed to input confidential information, or even specific client information that is publicly accessible, such as transaction terms, intellectual property elements, or market research. Most public generative AI platforms state plainly in their terms of use that inputted information is stored, structured, and then used to train the AI tool for future outputs.
Possible Uses of AI in Business
- Initial Drafts – The initial creation of drafts or outlines based on data input.
- Dictation and Meeting Minutes – Plugin technology is already available for many video conferencing tools.
- Converting Data Formats – For example, you can quickly convert data in an Excel document into a report for stakeholders upon request (a minimal sketch follows this list).
- Unidirectional Outputs – This is when we provide AI with a prompt, ask it to solve a certain problem or complete a complex task, and then we receive an output based on the instructions of the prompt.
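To make the data-conversion example concrete, here is a minimal sketch (illustrative only, not drawn from the article). It assumes the pandas library and the OpenAI Python SDK; the file name, model name, and prompt are placeholder assumptions. It reads spreadsheet data and asks a generative AI service to draft a stakeholder report, with the output still subject to human review, and it should only be run with data that your organization's AI-use policy permits sending to an outside tool.

```python
# Minimal sketch: convert spreadsheet data into a draft narrative report.
# Assumptions (not from the article): pandas, the OpenAI Python SDK (v1+),
# an OPENAI_API_KEY in the environment, and placeholder file/model names.
# Do not send confidential client data to a public AI service.
import pandas as pd
from openai import OpenAI

df = pd.read_excel("quarterly_sales.xlsx")  # hypothetical workbook

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You draft concise stakeholder reports."},
        {
            "role": "user",
            "content": "Summarize this data as a one-page report:\n"
            + df.to_csv(index=False),
        },
    ],
)
print(response.choices[0].message.content)  # draft for human review before delivery
```

The point of the sketch is the workflow rather than any particular vendor: the structured data stays machine-readable, the AI produces only a first draft, and a human supervises the output before it goes to stakeholders.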
Mechanics of Generative AI
- Data Collection – AI tools rely on collecting as much data as possible to improve the quality of their outputs, making them (theoretically) smarter through each deployment.
- Programming – Once data and algorithms are in place, developers may fine-tune the programming by adding controls, rules, bounds and limiters, refining the code base, and so on, as they test different outputs during the initial testing stages.
- Training – The data sets collected during the initial development and programming phases are combined into a master corpus. The launched tool, and later the legacy tool, keeps building on and feeding that data set, training itself to become smarter.
- Testing and Quality Assurance – Because of the nature of AI technology, AI tools must be tested constantly, and at a higher level of intensity, to assure the quality of their outputs. Environmental, social and governance (ESG) considerations can also have a larger impact on the testing and quality assurance of AI tools.
- Delivery – Generative AI tools can be offered to the general public as a service, with limits on how outputs may be licensed for use and reuse based upon the terms of use and compliance with training-data licenses.
- Licensing and Assignment – This includes the “nuts and bolts” of what is licensed and assigned to customers, end users, joint ventures and collaborators.
AI Legal Landscape
The European Union (EU) has been proactive in attempting to lead the way with an omnibus, comprehensive law that addresses how business owners and governments deploy AI. The EU released its proposed Regulation on Artificial Intelligence in April 2021, and it remains under review. In 2022, the European Commission also released the proposed Artificial Intelligence Liability Directive, under which an organization would be at risk of a private claim from any individual, employee, customer or vendor who believes the organization is in violation of the Artificial Intelligence Act or any other AI-related regulation in the EU.
In the United States, the following are important to note regarding the AI legal landscape:
- Copyright Author Status Laws – Focusing on the status of authors and their rights.
- Blueprint for an AI Bill of Rights, White House Office of Science and Technology Policy, 2022
- Automated Employment Decision Tools (AEDT), New York City, Dec. 2021
- Virginia Consumer Data Protection Act (VCDPA), 2021
- Copyright Office decision letter on Zarya of the Dawn, a graphic novel created with Midjourney, Feb. 22, 2023. The decision did not grant the AI authorship status for the graphic novel, but rather a thin copyright to the human author for what the human contributed as an editor of the inputs and outputs of the generative AI platform as a tool.
- Copyright Infringement – AI creates outputs from pools of data that include other people's work, raising questions about copyright infringement.
- Andersen et al v. Stability AI Ltd. et al: Class action against Stability AI, Midjourney and DeviantArt Inc. for using copyrighted images in data lakes that train generative AI tools.
- Fair Use Limits – Fair use limits the rights that U.S. copyright protects; similar limits could apply to trade dress or even trademarks.
- U.C.C. Article 12 – Proposed and under review by state legislators and regulators in the U.S., this law, once adopted in a state, governs transactions in a subset of digital assets called "controllable electronic records." It has currently been adopted by 11 states: Alabama, California, Colorado, Delaware, Hawaii, Indiana, Nevada, New Hampshire, New Mexico, North Dakota and Washington.
- Revised U.C.C. Article 9 – Proposed and under review by state legislators and regulators in the U.S., this revision clarifies how a secured party perfects a security interest in digital assets and ensures that it has priority. Most states have adopted Article 9, and the update for digital collateral has been adopted by Alabama, Colorado, Delaware, Hawaii, Indiana, Nevada, New Hampshire, New Mexico, North Dakota and Washington.
AI in Contracts – How Will They Change?
- Intellectual Property Considerations
- Both parties:
- Confirm the licensing status of all data used for the initial training.
- Confirm the licensing and assignment provisions in terms of service and use.
- AI-using party:
- Train human authors, artists, engineers and inventors on how to incorporate AI into their human work to maximize human authorship (see the Midjourney Zarya of the Dawn decision above).
- Deliver assessment to customer regarding scope of assignable IP.
- Remove the “best efforts” language in IP provisions for supporting registrations.
- Party receiving AI delivery:
- Confirm expectations of registrability up front. Require support for registration in the roadmap. Understand the limitations on registration of AI-generated works.
- Confidentiality Considerations
- AI-using party: Train AI prompt engineers and AI-management staff on how to avoid inputting confidential information, how to create a private data lake and local AI instance, or how to anonymize the data. Declare what protected data will be used, how it will be used, and how it will be stored.
- Party receiving AI delivery: Conflict check any AI apps that are used; require confidentiality protection for AI training and any prompting.
- Misrepresentation / Reps and Warranties Considerations
- Include an AI disclosure in the representations and warranties section or the performance sections.
- Note compliance with applicable disclosure requirements and follow those parameters in drafting.
- Key Man Considerations
- AI-using party: Acknowledge the humans who are key to the performance and retraining/monitoring of the AI, but avoid listing the AI as a key man, and list a "key man" role rather than an individual whenever possible.
- Party receiving AI delivery: Tie responsibility to the human (if only one person in the jurisdiction has the AI technical competence for the industry) or to the particular preferred app.
- Breach Materiality and Curing Considerations
- AI-using party:
- Ensure that reporting and log disclosure of AI processes and decision trees is available to the other party and regulators.
- Attempt to segment the AI responsibility to non-material terms via definitions.
- Adjust cure periods and methods to accommodate AI retraining or debugging.
- Party receiving AI delivery: Require transparency of AI processes and AI decisioning reports upon delivery.
- Payments Considerations
- AI-using party: Include related cure periods for technical issues or human disputes of AI payments. Soften the materiality of a failure to pay due to AI error or delivery of an improper amount. Make the evaluation process transparent regarding human vs. computer responsibilities.
- Indemnification Considerations
- AI-using party: Include indemnification caps or exclusions for certain types of foreseeable, but assumed, risks of AI use.
- Party receiving AI delivery: Remove caps or exclusions for certain foreseeable, but unassumed, risks of AI use.
- Limitation of Liability Considerations
- AI-using party: Adjust limitation of liability to protect against responsibility for mutually assumed risks of AI use, where appropriate.
- Party receiving AI delivery: Adjust limitation of liability to protect against foreseeable, but unassumed, risks of AI use.
- Non-Solicitation Considerations
- Add an AI exclusion to the exceptions for non-solicitation by public posting.
- Foresee the foreseeable! …and adjust AI tool prompts accordingly.
- Force Majeure Considerations
- Service Providers: Edit to be inclusive of AI issues.
- Customers: Explicitly exclude AI-related errors or service interruptions.
- Both parties:
