Artificial Intelligence (“AI”) has been the subject of academic, commercial, and social interest for decades. Recent technological advancements in AI, including the release and rapid improvement of tools like ChatGPT, have underscored AI’s influence on multiple spheres of modern life, including legal practice. This article introduces readers to AI in the legal industry and explores the ways in which AI impacts the practice of law.

Introduction

AI is the “simulation of human intelligence processes by machines, especially computer systems.”[1] AI is designed to mirror human intelligence and, as such, there are “[v]arying kinds and degrees of intelligence” that occur in computers.[2]

There are two prominent types of AI: (1) “weak” or “narrow” AI and (2) “strong” or “general” AI.[3] The former focuses on performing a specific task, such as answering questions based on user input or playing chess; a narrow AI system can perform the one task it was designed for, but not others. Narrow AI thus consists of reactive machines. These machines and systems (e.g., Siri, Alexa, Tesla vehicles, ChatGPT) perform and automate repetitive tasks, and range widely in sophistication.

On the other hand, strong AI is a theoretical form of AI that does not yet exist, but toward which researchers are advancing. Strong AI “would require an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future.”[4] Reasoning and complex problem-solving of this kind remain beyond what current technology can achieve. At present, AI tools can process big data and have learning capabilities, meaning the AI learns and makes predictions based on historical patterns, expert input, and feedback loops.[5]

Machine Learning

Machine learning is a sub-field of artificial intelligence. Classical (“non-deep”) machine learning models require more human intervention to segment data into categories.[6] Deep learning attempts to imitate the structure of the human brain by using “neural networks.” These networks operate together to “identify patterns within a given dataset” with less human guidance, and are more closely linked to human-level, or strong, AI.
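
To make the distinction concrete, the following is a minimal, illustrative Python sketch (assuming the freely available scikit-learn library; it is not drawn from any legal product) that trains a classical linear model and a small neural network on the same toy dataset of handwritten digits.

```python
# Illustrative sketch only: a classical model vs. a small neural network
# on the same toy task, using scikit-learn (an assumption of this example).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A small, built-in dataset of 8x8 images of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classical ("non-deep") model: a single linear classifier. In practice,
# such models often depend on features a human has already engineered.
classical = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Neural network: stacked layers of artificial "neurons" that learn
# intermediate patterns from the raw pixels with less human guidance.
neural_net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                           random_state=0).fit(X_train, y_train)

print("classical model accuracy:", classical.score(X_test, y_test))
print("neural network accuracy:", neural_net.score(X_test, y_test))
```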

AI Use in the Legal Industry

AI is currently used in both transactional and litigation applications. Many of the applications widely used by attorneys—including Microsoft Office products and embedded text applications in cell phones—already use AI to provide features that most users would find familiar, such as predictive text during drafting, readability and grammar review, and productivity analytics. AI software continues to evolve, offering legal professionals tools that can make their practice more efficient and more cost-effective for clients.

In transactional practice, AI is employed in many drafting and analysis tools, which aim to produce more accurate work product and streamline the drafting and revision process. These tools include:

  • Drafting Software – assembles documents from user inputs and responses to guided prompts.
  • Mining and Benchmarking Tools – store, recognize, and evaluate contract language; identify differences in language across documents; and memorialize why certain provisions are used.
  • Revision and Analysis Tools – assist with revising and refining documents, and identify duplicate provisions, ambiguous terms, and undefined terms.

One of the most time-consuming and expensive parts of litigation is the review and identification of relevant, privileged, or topic-specific documents contained in sizeable sets of electronically stored information (“ESI”). AI embedded within ESI review platforms, including predictive coding and continuous active learning, allows attorneys to review a smaller set of ESI (known as the “seed” set) to teach the AI to recognize review and identification patterns; after human review of an AI-reviewed validation set (the “control” set), the AI can then review the larger ESI set without document-by-document human review. Perhaps surprisingly, AI has been found to be as effective as human review at flagging ESI for relevance and privilege, particularly in large ESI sets. Because the technology is expensive to implement, it is not often used in smaller cases. In cases with significant document review requirements, however, it can be cost-effective for the client and free up attorney time for legal strategy, analysis, and briefing.
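
As a rough illustration of that workflow, the sketch below trains a simple text classifier on a labeled “seed” set, validates it against a human-reviewed “control” set, and then scores the remaining documents. The documents, labels, and model are hypothetical stand-ins; real ESI review platforms use proprietary and far more sophisticated systems.

```python
# Illustrative sketch only: the general predictive-coding workflow, using
# scikit-learn. All documents and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# 1. Attorneys review a small "seed" set and label each document.
seed_docs = ["email discussing the merger terms", "lunch order confirmation",
             "draft indemnification clause", "office holiday party invite"]
seed_labels = [1, 0, 1, 0]  # 1 = relevant to the case, 0 = not relevant

# 2. Train a text classifier on the labeled seed set.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs),
                                 seed_labels)

# 3. Check predictions against a human-reviewed "control" set before
#    trusting the model on its own.
control_docs = ["revised merger agreement redline", "parking garage reminder"]
control_labels = [1, 0]
accuracy = model.score(vectorizer.transform(control_docs), control_labels)

# 4. If validation is acceptable, score the (much larger) remaining ESI set;
#    attorneys then spot-check documents the model is least certain about.
remaining_docs = ["board minutes on the transaction", "cafeteria menu"]
relevance_scores = model.predict_proba(
    vectorizer.transform(remaining_docs))[:, 1]
print(accuracy, relevance_scores)
```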

AI has also been used in the courtroom as part of the jury selection process. Software can quickly intake potential juror information and compile a person’s online presence, including information available on public websites and social media. While such software can help attorneys quickly gain valuable insight into potential jurors, newer versions of this technology seek to analyze the data to predict jurors’ possible decision-making or behavioral patterns based on their publicly available information. This has raised concerns that the AI could base its conclusions regarding jurors on information that may be biased or discriminatory.

Legal research platforms have also added generative AI tools to their suites of services, allowing users to interact with the AI directly, much as they would with a customer service representative. Users can enter a question, request, or prompt into a chat, and the AI responds in a conversational manner. This feature can assist with summarizing caselaw, drafting briefs and contract clauses, and identifying helpful sources to improve research results.

Ethical Considerations of Using AI

As AI is integrated into legal practice, lawyers must be aware of the potential ethical and confidentiality risks associated with its use. In 2019, the American Bar Association (“ABA”) adopted Resolution 112, urging “courts and lawyers to address the emerging ethical and legal issues related to the usage of [AI] in the practice of law.”[7] Per Rule 1.1 of the ABA Model Rules,[8] a lawyer must provide competent representation to a client, which requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. Thus, to comply with Rule 1.1, lawyers must have a basic understanding of how AI tools operate and of the risks those tools present.

Additionally, per ABA Model Rule 1.4,[9] a lawyer has a duty to communicate with clients and to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.”[10] This means that a lawyer should obtain informed consent from the client before using AI. To comply with ABA Model Rule 1.5, which requires a lawyer’s fees to be reasonable, a lawyer may also need to inform the client of a decision not to utilize AI: failing to use AI that could reduce the cost of legal services may lead the lawyer to charge an unreasonable fee.

ABA Model Rule 1.6 requires a lawyer to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”[11] Therefore, before using AI in a client’s representation, a lawyer must ensure that confidential client information will not be disclosed through the use of the AI tool.

Finally, a lawyer must ensure that the work product produced by AI is accurate. Lawyers who neglect to do so may face sanctions and other consequences. In Mata v. Avianca, Inc., lawyers used ChatGPT to generate cases, citations, and quotations for a brief without verifying whether the citations were correct.[12] Several of the cited cases turned out to be non-existent. The court sanctioned the lawyers and their firm pursuant to Rule 11, including a $5,000 fine, noting in its order that the cited cases contained “legal analysis [that] is gibberish.”[13] In short, before relying on AI-generated work product, the lawyer must verify its accuracy.

AI Use and Potential Liability for Employment Discrimination

AI is susceptible to algorithmic bias that may result in unlawful discrimination. Algorithmic bias (or algorithmic discrimination) “occurs when automated systems contribute to unjustified treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.”[14] Thus, an employer that uses AI to make employment decisions may be exposed to liability for the AI’s unlawful discrimination.

Some of the most common ways that an employer’s use of AI in making hiring decisions can violate the Americans with Disabilities Act (“ADA”) are:

  • The employer does not provide a reasonable accommodation that is necessary for the applicant to be rated fairly by the algorithm.
  • The employer relies on an algorithmic decision-making tool that “screens out” an individual with a disability.
  • The employer adopts an AI decision-making tool for job applicants that violates the ADA’s restrictions on disability-related inquiries and medical examinations.[15]

Therefore, to limit potential violations of the ADA, employers must ensure that the AI decision-making tools used to select job applicants do not result in intentional or unintentional employment discrimination.

Conclusion

AI will continue to make inroads into the practice of law as it is increasingly incorporated into the tools that attorneys use every day. While attorneys should be mindful of the ethical considerations of using AI, as well as the technology’s limitations and the need for human review and verification, AI holds great promise for increasing attorney efficiency, handling time-consuming or mundane tasks, and making work product more cost-effective for clients.

Stay tuned for a series of articles delving into the specifics of the topics addressed here, and updates on AI in the legal industry.

Disclaimer: What is written here is for general information only and should not be taken as legal advice. If legal advice is needed, please consult an attorney.


[1] Burns, Ed, artificial intelligence (AI), TechTarget, https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence (last accessed June 30, 2023).

[2] McCarthy, John, What Is Artificial Intelligence?, Computer Science Dept., Stanford University, November 12, 2007, https://www-formal.stanford.edu/jmc/whatisai.pdf.

[3] What is strong AI?, IBM, https://www.ibm.com/topics/strong-ai (last accessed June 15, 2023).

[4] Id.

[5] van Duin, Stefan & Naser Bakhshi, Part 1: Artificial Intelligence Defined, Deloitte, March 2017, https://www2.deloitte.com/se/sv/pages/technology/articles/part1-artificial-intelligence-defined.html (last accessed August 1, 2023).

[6] Id.

[7] Resolution 112, ABA Section of Science & Technology Law, 2019, https://www.americanbar.org/content/dam/aba/directories/policy/annual-2019/112-annual-2019.pdf (last accessed August 1, 2023).

[8] Identical to Colorado Rule of Professional Conduct, 1.1.

[9] Identical to Colorado Rule of Professional Conduct, 1.4.

[10] David Lat, The Ethical Implications of Artificial Intelligence, Above the Law: Law2020, https://abovethelaw.com/law2020/the-ethical-implications-of-artificial-intelligence/.

[11] Identical to Colorado Rule of Professional Conduct, 1.6.

[12] Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023).

[13] Id. at *5.

[14] White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (October 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

[15] The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, U.S. Equal Employment Opportunity Commission, May 12, 2022, https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence; AI Bill of Rights, at 24-25.