The American Bar Association’s House of Delegates, its policy-making body, voted this week to approve a resolution urging courts and lawyers to address the emerging ethical and legal issues related to the use of artificial intelligence in the practice of law.

Among the AI-related issues the profession should address, the ABA said, are bias, explainability, and transparency of automated decisions made by AI; ethical and beneficial usage of AI; and controls and oversight of AI and the vendors that provide AI.

The resolution approved, with one minor revision, was Resolution 112, which had been proposed by the ABA’s Section of Science & Technology Law.

“Lawyers increasingly are using artificial intelligence in their practices to improve the efficiency and accuracy of legal services offered to their clients,” the section said in its report recommending adoption of the resolution. “But while AI offers cutting-edge advantages and benefits, it also raises complicated questions implicating professional ethics.”

The resolution that the ABA adopted is brief and short on details. It reads:

“RESOLVED, That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.”

As proposed by the Science & Technology Law section, the resolution did not include the phrase “in the practice of law.” The HOD added those words in the version that it approved.

The section’s report outlined many of the ways AI is being used in the practice of law, such as for predictive coding in e-discovery, due diligence reviews, litigation analysis, and legal research. Further, it said that a number of ethical rules apply to the use of AI, chief among them the duty of technology competence embodied within Rule 1.1, Comment 8. The report also described issues of bias and transparency in the use of AI.

But neither the report nor the resolution provides much in the way of specifics on how courts and lawyers should address these emerging issues. Regarding plans for implementing the resolution, the section’s report said:

“The Section of Science & Technology Law intends to study with interested ABA entities a possible model standard for legal and ethical usage of AI by courts and lawyers. This resolution could also be used by the ABA, as well as by ABA members to promote continuing legal education related to AI.”

But the report also suggests that a purpose behind the resolution is simply to raise awareness of issues around the use of AI. “Courts and lawyers must be aware of the issues involved in using (and not using) AI, and they should address situations where their usage of AI may be flawed or biased,” the report said.