A study released this week pitted two legal research platforms, Casetext CARA and LexisNexis's Lexis Advance, against each other and concluded that attorneys using Casetext CARA finished their research significantly faster and found more relevant cases than those using Lexis Advance.

The study, The Real Impact of Using Artificial Intelligence in Legal Research, was commissioned by Casetext, which contracted with the National Legal Research Group to provide 20 experienced research attorneys to conduct three research exercises and report on their results. Casetext designed the study's methodology in consultation with NLRG and wrote the report of the survey results itself.

This, Casetext says, proves the efficacy of its approach to research, which, as I explained in this post last May, lets a researcher upload a pleading or other legal document and then delivers results tailored to the facts and legal issues derived from that document.

“Artificial intelligence, and specifically the ability to harness the information in the litigation record and tailor the search experience accordingly, substantially improves the efficacy and efficiency of legal research,” Casetext says in the report.

But the LexisNexis vice president in charge of Lexis Advance, Jeff Pfeifer, took issue with the study, saying that he has significant concerns with the methodology and sponsored nature of the project. More on his response below.

The Study’s Findings

The study specifically concluded:

  • Attorneys using Casetext CARA finished the three research projects on average 24.5 percent faster than attorneys using traditional legal research. Over a year, that faster pace of research would save the average attorney 132 to 210 hours, Casetext says.
  • Attorneys using Casetext CARA found that their results were on average 21 percent more relevant than those found doing traditional legal research. This was true across relevance markers: legal relevance, factual relevance, similar parties, jurisdiction, and procedural posture.
  • Attorneys using CARA needed to run 1.5 searches, on average, to complete a research task, while those using LexisNexis needed to run an average of 6.55 searches.
  • Nine of the 20 attorneys believed they would have missed important or critical precedents if they had done only traditional research without also using Casetext CARA.
  • Fifteen of the attorneys preferred their research experience on Casetext over LexisNexis, even though it was their first experience using Casetext.
  • Every attorney said that, if they were to use another research system as their primary research tool, they would find it helpful to also have access to Casetext.

Study Methodology

The attorneys who performed the research are all experienced in legal research and have on average 25.3 years in the legal profession, the report says. They were each given a 20-minute training in using Casetext CARA. They were given a brief introduction to LexisNexis, but their familiarity with that platform “was presumed.”

Cover page of the Casetext study.

The attorneys were given three research exercises, in copyright, employment and insurance, and told to find 10 relevant cases for each. They were randomly assigned to complete two exercises using one platform and one using the other, so that roughly the same number of exercises were performed on each platform.

With each exercise, they were given litigation documents from real cases (complaints or briefs) and were asked to review and familiarize themselves with those materials. They were then given specific research tasks, such as “find ten cases that help address the application of the efficient proximate cause rule discussed in the memorandum in support of the motion for summary judgment.”

When researchers used CARA, they were able to upload the litigation materials. The study says that some researchers using Casetext were given sample search terms, but that most formulated their own search terms.

The researchers were told to track how long it took to perform each research assignment and how relevant they believed each case result to be, and to download their research histories. They were then asked a series of survey questions about their overall impressions of their research experiences.

Casetext then compiled all the information and prepared the report.

Lexis Raises Concerns

Pfeifer, who as chief product officer, North America, oversees Lexis Advance, expressed concern that the survey report failed to fully disclose the relationship between Casetext and NLRG. LexisNexis provided me with the following quotation from John Buckley, president of NLRG:

Our participation in the study primarily involved providing attorneys as participants in a study that was initially designed by Casetext. We did not compile the results or prepare the report on the study—that was done by Casetext.

Pfeifer also raised concerns about the study methodology. “The methods used are far removed from those employed by an independent lab study,” he said. “In the survey in question, Casetext directly framed the research approach and methodology, including hand-picking the litigation materials the participants were to use.”

Finally, Pfeifer noted that participants were trained on Casetext prior to the exercise, but not on Lexis Advance. “With only a brief introduction to Lexis Advance, it was presumed that all participants already had a basic familiarity with Lexis Advance and all of its AI-enabled search features.”

“From the limited information presented in the paper, the actual search methods used by study participants do not appear to be in line with user activity on Lexis Advance,” Pfeifer said. “References to ‘Boolean’ search is not representative of results generated by machine learning-infused search on Lexis Advance.”

Casetext Responds

During a phone call yesterday, Casetext CEO Jake Heller and Chief Legal Research Officer Pablo Arredondo defended the study.

“We think this is pretty darn neutral,” Heller said. “We gave it to them [NLRG] and they ran with it.”

Heller said that Casetext and NLRG worked collaboratively to design the methodology and that NLRG gave a lot of feedback in setting up the study.

I asked them why the study singled out LexisNexis for comparison and did not include other legal research services, and particularly why it did not include the new Westlaw Edge. They said that many legal professionals view Westlaw and LexisNexis as interchangeable, and that their goal was to demonstrate how Casetext stacks up against this traditional research duopoly.

Bottom Line

When a tobacco company funds research on the health effects of cigarette smoking, it doesn’t matter what the research finds. No matter how it turns out, the study is tainted by the source of the dollars that paid for it.

That’s my problem with this study. I’m a fan of Casetext’s CARA. When I tested it for myself in May, I was impressed, writing:

In my initial testing, the addition of CARA’s AI to a query makes a powerful combination, delivering results that much more closely matched my facts and issues.

But this study is tainted by Casetext’s funding of it and control over it — down to providing the research issues and materials and even suggesting search terms. That does not mean it is wrong. It just means there is a big question mark hovering over it.

But here, Casetext’s Arredondo gets the final word. When I raised that issue, he said: “The best study would be for an attorney to sign up for a free trial of CARA and see for themselves.”

So if, like me, you’re a skeptic by nature, give it a try.

[Side note: Coincidentally, my new LawNext podcast recently featured episodes with both Heller and Arredondo of Casetext and Pfeifer of LexisNexis. Give them a listen to hear more about their work.]

In June, I wrote about a new legal industry award intended to honor innovation in legal practice and legal technology, The Changing Lawyer Awards, for which I was to be one of the judges. Now, the results are in.

The sponsor of the awards, Litera Microsystems and its online publication The Changing Lawyer, announced the winners during a breakfast event this week at ILTACON, the annual conference of the International Legal Technology Association.

The four categories of awards recognize individuals, firms and companies for their willingness to embrace and drive change, through new technology, service models or behavior. As one of the judges, I can tell you that there were many excellent entries and that choosing winners was tough.

That said, here they are, including the briefs from Litera explaining why each winner was chosen:

Outstanding Lawyer of the Year: Natalie Munroe, Osler, Hoskin & Harcourt

This award is presented to the lawyer who has taken initiative, identified market trends and boldly figured out how to use groundbreaking processes to get ahead of them.

Natalie Munroe is a lawyer at Osler, Hoskin & Harcourt LLP where she is head of Osler Works Transactional, an innovative new service based in Ottawa that uses a combination of people, processes and technology to support the firm’s deal teams and clients, largely in the context of corporate and commercial transactions. In this role, Natalie leads a team of lawyers and other professionals, and has been integral to the ongoing success of the Osler Works Transactional initiative.

Over the last year, Natalie and her growing team have helped hundreds of clients streamline their transactional processes and save time and costs. Currently her team is focused on providing due diligence and closings while creating customized legal solutions that evolve with the client’s needs and expectations.

Outstanding Law Firm of the Year: Reed Smith

This is presented to the firm that has implemented core process changes, creative billing arrangements, or other innovations to transform the way they provide legal services.

In October 2016, Reed Smith embarked on a firm-wide initiative to create a culture of innovation. This has been the catalyst for a variety of projects, including multiple home-grown advances in legal technology and improvements in the delivery of legal services to their clients. Innovators at the firm have the firm’s commitment, facilities, technology, and manpower at their disposal to deliver new ideas and projects, which are supported throughout the process by their dedicated team of Innovation professionals. The firm provides up to 50 hours of billable credit for lawyers to work on innovation projects, credit that encourages attorneys to pursue their biggest ideas.

In 2017, the firm piloted ten such projects, which improved, among other things, the firm’s approach to pricing, project management, and budget tracking; its deal management and performance; and the ways it helps clients understand their obligations arising from data-loss incidents. The program has been such a success that it is already underway again in 2018.

Outstanding CIO of the Year: Judith Flournoy, Kelley Drye & Warren

This is presented to the CIO who has championed new programs or tools, driving increased efficiency through innovative processes, or creating entirely new ways of working.

Judi Flournoy is CIO for Kelley Drye & Warren and is passionate about the importance of innovating to stay at the forefront of the legal profession. Working her way up from a deeply technical role to the CIO ranks, Judi developed an appreciation for what end users need from technology as well as what it takes to deliver the technology.

Recently, during a DMS migration, she studied the firm’s practice areas and how its lawyers worked so that she could include the lawyers in decisions about workspace design. By giving the firm’s lawyers that buy-in, she delivered a solution that was relevant to everyone at the firm. Although this morning we are honoring Judi for embracing and driving change at her firm, we could equally honor her for her leadership as a former ILTA president, chairman of ILTA LegalSEC, and a member of the Founding Circle for ALT.

Disruptor of the Year: Casetext

This is presented to an alternative legal service provider or legal tech startup that has managed to disrupt the broader legal profession through new processes, technology, or service delivery.

Founded by a team of former litigators from top law firms, as well as Ph.D. data scientists and leading A.I. engineers, Casetext helps legal researchers find cases, faster. Casetext’s CARA AI for analyzing legal briefs and documents has proven to be a truly disruptive technology. The company has also seen, however, that large law firms can have a natural resistance to change when it comes to new technology.

Casetext recently began using an innovative gamification initiative to help firms roll out its solution effectively by increasing awareness and usage among a firm’s lawyers. During its pilot program, the company deployed the gamification initiative across one firm’s more than 1,500 attorneys in more than 30 offices. The program increased usage of CARA AI by 159% among new users and by 300% in overall user activity.

Based on the success of their pilot program, Casetext plans to deploy this initiative across law firms they work with to increase adoption.

In addition to myself, the judges were Jeffrey Brandt, CIO for Jackson Kelly and editor of the PinHawk Law Technology Daily Digest; Casey Flaherty, principal of Procertas; Ivy Grey, senior attorney at Griffin Hamersky and the author of American Legal Style for PerfectIt; Caroline Hill, editor-in-chief for Legal IT Insider (aka The Orange Rag); and Avaneesh Marwaha, CEO of Litera Microsystems.

Congratulations to all the winners.

When the legal research service Casetext was first launched in 2013, its founders wanted to democratize access to the law by providing free access to legal research enhanced through crowdsourced annotations. As the company grew and took on venture capital, it increasingly targeted its sales and marketing to the large-firm market and implemented pricing appropriate to that market.

Now, the company is, in a sense, returning to its roots, aiming to expand its use among solo and small firm lawyers, while continuing to serve the large-firm sector. Today, it introduced Casetext for Small Law, with new features designed to make its platform more useful to smaller firms, including additions to its library, a new negative-treatment citator feature, and lower subscription prices.

The goal, cofounder and CEO Jake Heller told me this morning, is to provide smaller-firm lawyers with a complete legal research service that gives them everything they need from a provider such as Westlaw or Lexis Advance but at a more-affordable cost.

Today’s announcements include:

  • A reduction in the monthly subscription price from $139 to $89. The prepaid annual price goes from $129 a month to $65 a month. Casetext now also offers a two-seat license of $119 a month or $95 a month if paid annually. This is $1,000 less than single-state primary law plans from Westlaw or Lexis, Heller says, and $1,500 less than comparable full-state plans.
  • The addition of statutes from all 50 states. Until now, it had statutes for only five states.
  • The addition of cases from the Patent Trial and Appeal Board.
  • A new negative-treatment citator that shows red flags on cases that have been overturned or treated negatively on appeal. In the past, Casetext relied on algorithmic analysis to identify if a case had been overturned, but its new citator combines human and machine analysis for far more accurate results, Heller said.

Casetext has always had solo and small-firm subscribers, Heller said, but its goal now is to demonstrate to lawyers in smaller firms that Casetext is a viable and lower-cost alternative.

Going forward, the company will invest heavily in building out content and features targeted at smaller firms, Heller said, with more announcements soon to come.

“Our goal is to offer to small and solo attorneys everything you need, done extremely well, at a price you’ll love,” Heller said.

The World Economic Forum, a Geneva-based nonprofit focused on entrepreneurship in the global public interest, has recognized 61 early-stage companies as Technology Pioneers for their design, development and deployment of potentially world-changing innovations and technologies. Of the 61, only one is a legal technology company.

That one is Casetext, the legal research company founded in 2013 that has been a key player in pioneering the use of artificial intelligence to enhance legal research. (See, for example, my recent post: Casetext Just Made Legal Research A Whole Lot Smarter.)

The Forum’s description of Casetext says:

Casetext provides free, unlimited access to the law and charges for access to premium technologies that attorneys can use to make their research more thorough and more efficient. It is the novel application of artificial intelligence (AI) to the law that allows attorneys to use the context of what they are working on to jumpstart their research.

The Forum launched its Technology Pioneer program in 2000. Each year, it recognizes a limited number of companies and incorporates them into its initiatives, activities and events. Companies selected in the past include: Airbnb, Google, Kickstarter, Mozilla, Scribd, Spotify, Twitter and Wikimedia.

The newly selected companies will be invited to the World Economic Forum Annual Meeting of the New Champions 2018 in Tianjin, People’s Republic of China, in September, and some will also participate in the World Economic Forum Annual Meeting 2019 in Davos-Klosters, Switzerland, in January.

The Forum last selected a legal technology company as a Technology Pioneer in 2016, when its list included FiscalNote, a platform for access to legislative and regulatory data.

Most judges in a survey say that they see lawyers miss relevant precedent in their legal research and that those missing cases have impacted the outcome of a motion or proceeding.

The legal research company Casetext surveyed 66 federal and 43 state judges to learn whether missing precedent ever affects the outcome of a matter. The survey asked just two questions:

  • How often do you or your clerks uncover case law that attorneys should have cited in their briefing but did not?
  • Has a party missing a precedent before your chambers impacted the outcome of a motion or proceeding?

With regard to the first question, every one of the judges has seen attorneys miss relevant cases. More than 27 percent see it happen the majority of the time or in almost every case; fewer than 17 percent see it only rarely.

With regard to the second question, 68 percent of the judges surveyed said that attorneys missing relevant precedent has materially impacted the outcome of a motion or proceeding.

For Casetext, the takeaway of this survey is that lawyers should use its CARA artificial intelligence technology to help find missing cases.

I am a proponent of using tools such as CARA to help round out one’s research. But I would take it a step further and say this survey suggests the importance of using multiple services in your legal research.

Last year, I wrote about research by Susan Nevelow Mart, director of the law library and associate professor at the University of Colorado Law School, which shows that different legal research platforms deliver dramatically different results. In a comparison of six leading research providers, there was hardly any overlap in the cases that appeared in the top-10 results returned by each.

That means no research service is perfect. While AI tools may make a given service better, it still will not be perfect.

Download the survey: The Prevalence of Missing Precedents.

Casetext today is rolling out three major updates to its legal research platform. One is an updated design and architecture to make the site faster, cleaner and easier to use. But it is the other two updates that I want to focus on, because, in combination, they dramatically enhance the results you receive when performing legal research.

Both updates involve Casetext’s artificial-intelligence, brief-analysis software CARA (Case Analysis Research Assistant):

  • First, CARA now works not only with briefs, but with any type of litigation document — and possibly even non-litigation legal documents, based on my testing.
  • Second, CARA is now integrated into Casetext’s standard legal research workflow, so that it can be used to enhance keyword queries and deliver results that are far-better matched to the facts and issues at hand.

When Casetext introduced CARA in 2016, it was the first product of its kind on the market. When you uploaded a brief or memorandum into CARA, it would analyze it and generate a list of cases that were relevant to the issues discussed in the document but not mentioned in it. It was a powerful tool for vetting an opponent’s brief or checking documents of your own. The American Association of Law Libraries named it new product of the year in 2017, and it spawned a new generation of similar brief-analysis programs.

But for CARA to work, the uploaded document needed to contain case citations. That was because its algorithm compared the cases in the uploaded document to the cases and articles in the Casetext database, looking for other cases that were usually cited alongside those cases.
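Casetext has not published CARA's actual algorithm, but the co-citation idea described above can be sketched in a few lines. In this hypothetical example, each indexed document in the corpus is reduced to the set of cases it cites, and candidate cases are scored by how often they appear alongside the cases in the uploaded document; the function name and data shapes are illustrative, not Casetext's.

```python
from collections import Counter

def suggest_cases(uploaded_citations, corpus):
    """Score cases that frequently co-occur with the uploaded document's
    citations, excluding cases the uploaded document already cites.
    corpus: list of sets, each the citations found in one indexed document."""
    uploaded = set(uploaded_citations)
    scores = Counter()
    for doc_citations in corpus:
        overlap = len(uploaded & doc_citations)
        if overlap:
            # Cases cited alongside the uploaded cases, weighted by overlap
            for case in doc_citations - uploaded:
                scores[case] += overlap
    return [case for case, _ in scores.most_common()]

corpus = [
    {"Roe", "Doe", "Smith"},
    {"Roe", "Smith", "Jones"},
    {"Doe", "Smith"},
]
print(suggest_cases(["Roe", "Doe"], corpus))
```

Running this toy corpus ranks "Smith" first, since it co-occurs with the uploaded cases in all three documents. It also makes the limitation in the paragraph above concrete: if `uploaded_citations` is empty, no document overlaps and nothing can be suggested, which is why the original CARA required citations in the uploaded document.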

With today’s update, CARA works with any kind of legal document, regardless of whether it contains citations. You can, for example, upload a complaint that contains no citations and use CARA to find cases relevant to the facts and issues.

That would be good news of itself. But even more notable, in my opinion, is the integration of CARA within the standard legal research workflow. I got to try it over the weekend in advance of today’s release, and I found that I almost always obtained search results that were directly on point to my facts and issues, and did so without having to construct elaborate queries.

Here is how it works.

Now when you go to perform a research query in Casetext, the standard search bar has two buttons under it, one labeled Keyword Search and one labeled CARA Search. If you select the latter, you are prompted to upload a legal document.

Once you do, CARA analyzes the document and then contextualizes your query to the facts and legal arguments the document contains. With this approach, even a generic query can deliver on-point results.

Here is a simple example. Say I am researching whether copyright can vest in state legal materials. If I search just “copyright,” Casetext gives me 65,124 cases. The top results are not helpful, and that would be an awful lot of cases to wade through. I went on to try constructing more complex queries, but still could not put my finger on cases that were on point. My only option would have been to start reading through a long list of results in the hope of hitting on what I wanted.

But if I did that same simple query, “copyright,” and also uploaded the complaint filed by Fastcase in its lawsuit against Casemaker over copyright in Georgia administrative materials, then I obtained results that were directly on point. Even though my query remained just the generic term “copyright,” CARA used the complaint to identify the facts and issues I was interested in and deliver on-point results.

As another example, I searched “allergy” and got 10,401 cases. What I was interested in was liability for death caused by peanut allergy, but Casetext could never have known that based on a generic search for allergy.

If, however, I uploaded a complaint seeking damages for a death caused by exposure to peanut oil, I received cases that were much more directly on point.

It is important to understand that you don’t have to use simplistic queries such as I’ve used here. You can still construct more complex queries. But even then, by combining your query with a legal document, you are likely to get more precise results.

Note also that you can toggle between the CARA results and the keyword results at any time.

Just to push the envelope a bit, I even tried this with a few documents that had no relation to litigation. These experiments produced mixed results. For example, I tried uploading written testimony I had submitted to the state legislature on pending bills, and then tried entering queries related to the subjects of those bills. Twice, the CARA-assisted search performed better than a keyword search. But a third time, it seemed no better than a keyword search. I also tried uploading a memorandum that provided a broad overview of a legal topic and then entered a query relating to a specific application of that topic, with only so-so results.

These experiments with non-litigation documents lead me to think that the CARA AI-assisted research tool works better with litigation documents because they contain more precise discussions of facts and issues.

Those anomalies aside, the CARA AI-assisted research tool strikes me as a major advance in legal research. It addresses a core problem with traditional legal research — that the results are typically too broad. Even a skilled researcher capable of constructing complex Boolean searches often finds it difficult to zero in on highly pertinent results. In my initial testing, the addition of CARA’s AI to a query makes a powerful combination, delivering results that much more closely matched my facts and issues.

 

On Monday, I blogged about the launch by ROSS Intelligence, the AI-based legal research platform, of EVA, a free product that analyzes briefs and performs various functions, including determining whether the cases they cite are still good law.

After hearing about EVA, the folks at Casetext — who have their own brief analyzer, CARA — challenged ROSS to participate in a “robot fight” here at Legaltech/Legalweek in New York, where both companies are participating. The challenge was to engage in a head-to-head comparison of the two products, live in front of an audience.

ROSS declined to participate, but Casetext decided to stage it anyway, creating their own EVA account and running the same brief through both analyzers to see how the two platforms compared.

I broadcast the robot fight on Facebook Live, and you can watch it for yourself below. The principal speaker you’ll hear is Jake Heller, founder and CEO of Casetext.

[Update: If you're having trouble viewing the embedded video, view it on Facebook. For a great recap of this face-off, see this post by Joe Patrice on Above the Law.]

Every legal researcher has come across the phrase in a judicial opinion, “It is well settled that …,” or, “It is axiomatic that …” In 2014, I wrote about a prototype legal research website that mined opinions for instances of these phrases and made them searchable as a way of helping researchers find statements of established principles of law.

That website, WellSettled.com, no longer exists, but its creator, Pablo Arredondo, is now chief legal research officer and cofounder of Casetext. Today, Casetext is launching a much-expanded and enhanced version of that former site in the form of two new research databases, Black Letter Law and Holdings, which are intended to help researchers quickly home in on key rulings and common law doctrines.

The Black Letter Law feature is essentially what I described above. Casetext has searched cases for instances of phrases such as “it is well established,” “it is well settled” and “it is axiomatic,” and created a database that so far has over 100,000 of them. A researcher can use these to quickly find principles of black-letter law and the cases that support them.

The Holdings feature is a bit different. It finds the parentheticals that an opinion contains when it cites to a prior case and summarizes the prior case’s holding. These collected references form a database of other courts’ summaries of what a particular case held. Whereas the former WellSettled.com site had some 400,000 of these, Casetext’s new Holdings feature has more than 2.3 million.

This feature creates something akin to a legal treatise, Arredondo says. “What we have here is the entire judiciary of the United States for a century and a half creating concise case summaries for us.”

Lawyers use secondary sources such as treatises to find a quick and concise way to get to primary law, Arredondo says. “This database has the features of a secondary source but pulled from caselaw.”

The Black Letter Law feature is useful in several contexts, Arredondo says. One is when researching an unfamiliar area of law. This feature can help quickly reveal the skeleton of the law and identify the black-letter principles.

Another is in finding precedent that has become so settled that it is hard to find a case that stands for it. With this feature, the researcher can find cases to cite that state that a principle is well settled.

Researchers can access results from both Black Letter Law and Holdings through a standard keyword search on Casetext or via CARA, Casetext’s AI-powered legal research service.

 

The legal research company Casetext has introduced a feature that monitors an attorney’s litigation dockets for briefs and memoranda from opposing counsel and then automatically delivers a report of case law that is relevant but not included in the document.

The feature uses Casetext’s legal research assistant CARA, an analytical tool that automatically finds cases that are relevant to a legal document but not cited in it. The standard way to use CARA is for an attorney who has received a brief, memorandum or other legal document to upload it to CARA, which then performs its analysis and generates a list of relevant cases that are not mentioned in the document.

With this new feature, which Casetext is calling CARA Notifications, Casetext monitors all the PACER dockets in which an attorney has active matters. Whenever opposing counsel files a substantive document such as a brief or memorandum, Casetext retrieves the document, runs it through CARA, and delivers the report to the attorney.
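The monitoring loop described above reduces to a filter-then-analyze pipeline. The following sketch is purely illustrative; Casetext has not published an API, so every name, field, and keyword list here is a hypothetical stand-in:

```python
# Docket-entry descriptions that suggest a substantive filing (illustrative list)
SUBSTANTIVE = ("brief", "memorandum", "motion for summary judgment")

def is_substantive(entry):
    """Heuristic: treat an entry as substantive if its description
    mentions a brief-like filing."""
    return any(term in entry["description"].lower() for term in SUBSTANTIVE)

def notify(docket_entries, attorney, analyze):
    """Run the analyzer over new substantive filings by opposing counsel,
    returning one report per qualifying document."""
    reports = []
    for entry in docket_entries:
        if entry["filed_by"] == attorney:
            continue  # skip the attorney's own filings
        if not is_substantive(entry):
            continue  # skip routine filings such as notices
        reports.append(analyze(entry["document"]))
    return reports

entries = [
    {"filed_by": "opposing", "description": "Brief in opposition", "document": "opp-brief"},
    {"filed_by": "me", "description": "Brief in support", "document": "my-brief"},
    {"filed_by": "opposing", "description": "Notice of appearance", "document": "notice"},
]
print(notify(entries, "me", analyze=lambda doc: f"report for {doc}"))
```

Only the opposing side's brief generates a report; the attorney's own brief and the routine notice are filtered out, matching the behavior Casetext describes for CARA Notifications below.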

“Traditionally in legal research, an attorney gets a brief and then seeks out case law to oppose the brief,” Pablo Arredondo, chief legal research officer at Casetext, explained. “The closest thing there has been to push notification is that some research services let you track a case or track a search. What we’re doing now — and I believe we’re the first — is pushing the caselaw to oppose the brief automatically based on monitoring the dockets.”

Seven firms have been using this feature on a pilot basis since Oct. 1, including Quinn Emanuel Urquhart & Sullivan, Ogletree Deakins, and Fenwick & West. The feature is being provided to them as part of their standard subscription, at no extra cost.

Casetext is analyzing the text of docket entries and documents to determine which are substantive and which are not, so that it does not run routine filings through the analysis. It only analyzes documents filed by opposing sides in the case, so the attorney’s own filings are not automatically analyzed. (Of course, subscribers can always run their documents through CARA before they file them.)

One early user called the service “anticipatory knowledge retrieval,” Arredondo said. “This is one of these early embodiments of what having an AI system can look like — software listening to the stream of a litigation record and doing what it can when an event happens.”

See also: