In 2025, no issue at the intersection of ethics and technology looms larger than the current state of artificial intelligence (AI), its application to legal practice, and its impact on society. In just over two years, generative and agentic AI tools have transformed the way millions of people discuss and interact with technology.
Generative AI tools, such as the popular ChatGPT program, use extensive large language model (LLM) training on existing data and a predictive algorithm to draft text, such as an email, legal brief, or article. Agentic AI tools are newer and take generative AI (GenAI) a step further to accomplish discrete tasks: they autonomously analyze input data, use LLM output to determine the steps necessary to complete the task, and then access software tools to act on those determinations.
For lawyers, the consideration of AI tools is twofold: We must consider the tools available for use in our practice and weigh the risks and benefits thereof. We must also be aware of how AI technology in general will have an impact on the legal landscape in civil and criminal litigation.
The primary consideration when selecting an AI tool for legal work is that the tool complies with confidentiality requirements and that its use does not risk waiving the attorney-client privilege. Another important consideration is that the attorney must oversee the output of any LLM and verify any conclusions drawn. The American Bar Association (ABA) Model Rules of Professional Conduct regarding competence, communication, and confidentiality are all implicated by AI technology. ABA Formal Opinion 512, released on July 29, 2024, made clear that attorney use of AI tools is covered by the rules.
Rule 1.1 states that “competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” To be competent in one’s practice, an attorney must understand and be proficient with the tools they use in representation. Over the past two years, numerous attorneys have appeared in both case decisions and media headlines for incompetent use of GenAI tools. Too often, an LLM tool will fabricate a desired result out of whole cloth, with no basis in reality. If LLMs are used to assist research, the output must be thoroughly vetted and any case citations must be confirmed. Generative AI might be an early step in a competent attorney’s research, but it cannot be the final product. Formal Opinion 512 makes clear that attorneys “need not become [AI] experts” but must have a reasonable understanding of both the capabilities and limitations of the tools they choose to use in practice.
Rule 1.6 requires that an attorney maintain the confidentiality of client and case information. One exception occurs when the client provides informed consent to the disclosure. Additionally, a lawyer must make reasonable efforts to prevent the inadvertent disclosure of, or unauthorized access to, confidential client data. The mechanism of LLMs and GPT tools involves the transmittal of data over the internet, and the security of that data depends on the specific tool used. Attorneys must consider whether the data they provide in a GPT inquiry is encrypted, stored for any purpose, or used to train the LLM in the future. Failure to do so is likely a breach of professional responsibility and could even lead to waiver of the attorney-client privilege in certain cases.
Rule 1.4 requires lawyers to promptly inform clients of matters that require client consent and to consult with clients regarding “the means by which” the client’s goals are accomplished. As noted in the discussion of Rule 1.6, if use of AI requires any disclosure of client data, then such use requires client consent. Opinion 512 also notes that consent must be provided if use of an AI tool will “influence a significant decision in the representation.” In those cases, clients must be made aware of a lawyer’s intention to use AI and given the opportunity to withhold consent.
Formal Opinion 512 notes that other ethical rules are implicated by use of GenAI tools, including candor toward the tribunal (Rules 3.1 and 3.3), supervisory responsibilities (Rules 5.1 and 5.3), and fees (Rule 1.5). It is clear from the opinion that attorneys are responsible for the veracity of any AI-generated claims they make to the court, for the supervision and oversight of both AI tools employed in their practice and any junior attorneys using them, and for clarity in billing work in which GenAI technology was used.
ChatGPT entered the public consciousness in fall 2022, and the ABA’s Formal Opinion 512 was released less than two years later in summer 2024. By fall 2025, attorneys should have a clear idea of their responsibilities when using these tools in their practice. Now, however, we are beginning to see GenAI-generated conflicts and controversies make their way to the legal docket.
Courts are now considering questions of technology companies’ liability for the output of their models. One family is suing OpenAI, the company behind ChatGPT, alleging that the model encouraged their son’s suicide. There are also core questions of copyright infringement being raised in court about the training of these models. Even more concerning, the ability to easily produce GenAI audio, video, and image output raises foundational evidentiary questions regarding the veracity of alleged recordings. Even when evidence can be shown to be legitimate, questions regarding the potential for evidence to be “deepfaked” (the term given to realistic video or audio fabricated by AI) will linger in the public’s mind.
Attorneys are successfully adapting to the widespread use of AI tools in legal practice, but the age of AI is still in its infancy. While we have incorporated the permissible use of AI into our ethical framework, we still must work to understand how these tools are being deployed in the world, how they will affect human behavior, how they will interact with existing laws, and how they will drive new legislation.
It is important that attorneys continue their education on both the use of GenAI tools and technological developments in the field. One option for continuing education is the new national AI Virtual American Inn of Court. The Inn is open to attorneys across the country and meets remotely online to discuss developments in AI and the law. For more information or to join, please visit the AI Virtual Inn's website.