
ABA’s Guidance on Generative AI: Client Consent


The ABA just released its formal opinion on AI, which is a must-read (the full text of the opinion is linked here).

While most of the guidance can be followed by combining your understanding of the rules of ethics with a passable understanding of LLM functionality, the ABA provided express dictates governing when you must get consent from your client before using AI. 

The following summary covers three of the most common situations that mandate client consent, as well as the overarching balancing rule established by the ABA.

Self-learning AI 

“Self-learning” AI is AI that passively collects data and uses that data to improve its function for the end user and/or all users. 

For our purposes, there are two types of self-learning AI: AI systems that collect user data and use it to train the model for all users (“Global”), and systems that collect user data and use it to train the model for only that user (“Siloed”).

Critically, the ABA explicitly states that a firm must affirmatively obtain a client’s consent to use both Global and Siloed self-learning systems. 

How do you know if you’re using a self-learning AI system?

Look out for AI-marketing language that includes words or phrases like:

  • Unlock the value of your firm’s work product
  • Improves or gets better with use
  • Learns to ______
  • Trained using your ______
  • Leverages your or your firm’s work product
  • Drafts like you or in your writing style
  • Customized or tailored AI processes

To avoid triggering this consent requirement, look for companies that train their AI only on publicly available documents (e.g., Briefpoint).

Significant AI-Influenced Decisions

As the ABA states, client consent is necessary when the AI’s output “will influence a significant decision in the representation.”

The ABA’s stated examples of what constitutes a “significant decision” include litigation outcome and jury selection analysis. 

The test for what constitutes a “significant decision” is any decision that a client “would reasonably want to know whether . . . the lawyer is exercising independent judgment or . . . is deferring to the output of a [generative AI] tool.”

Other examples I might add include things like witness selection, settlement analysis/valuation, jury waiver, and whether to appeal.

Decisions that are likely less-than-significant in this context might include things like which citation to use in support of a discrete position, what objections to lodge against a discovery request, and the exact phrasing of the questions you intend to ask on cross-examination. 

These are things you would generally not discuss with your client anyway. Hence, a good test for whether you need AI consent is whether AI influenced any strategic decision you would normally discuss with your client.

Attorney Specialization

Broadening the latter mandate, the final consent guidance dictates that consent must be obtained “where a client retains a lawyer based on the lawyer’s particular skill and judgment, when the use of a [generative AI] tool . . . would violate the . . . client’s reasonable expectations regarding how the lawyer intends to accomplish the objectives of the representation.”

While the ABA did not provide any hypotheticals where this applies, the ABA appears to intend that an attorney hired for a specialized purpose cannot (without consent) substitute AI output for their own expertise – regardless of context.

For example, if an attorney who specializes in aviation is hired to write a discrete opinion on FAA regulations because that attorney is a specialist, the attorney would need consent from the client before basing his/her opinion on an AI output.

Like most of the ABA’s guidance, this is more common sense than anything else – if someone hires you to do a job because you’re great at that job, secretly hiring someone else to do the work would likely violate your client’s expectations. 

Balancing Test for Client Consent, Generally

The ABA admits that, even with this guidance, anticipating every scenario wherein client consent is required before using AI is impossible. 

Accordingly, the ABA recommends that any lawyer considering whether their AI use necessitates client consent weigh (1) the client’s needs and expectations, (2) the scope of the representation, and (3) the sensitivity of the information involved. 

In making these considerations, the ABA further recommends that the lawyer consider: (1) the AI tool’s importance to a particular task, (2) the significance of that task to the overall representation, (3) how the AI tool will process the client’s information, and (4) the extent to which knowledge of the lawyer’s use of the AI tool would affect the client’s evaluation of or confidence in the lawyer’s work. 

Final Thoughts 

While the ABA’s opinion adds much-needed clarity to ethical AI use, most of its guidance can nevertheless be followed with a basic understanding of how generative AI works and common-sense ethical practices. 

That “basic understanding of how generative AI works” is in fact stressed by the ABA in its opinion as a requirement for attorneys using generative AI. 

That is also the very thing I teach in my free MCLE on ethical usage of AI. If you would like me to present the course to your firm, please schedule a time to meet using the following calendar: Setup Ethical AI Use MCL
