A legal analytics product being launched today by LexisNexis does something no other analytics product does: It analyzes the language of specific judges’ opinions to identify the cases and arguments each judge finds persuasive.

The new product, Context, also provides analytics on expert witnesses, and may be the most comprehensive product available for this purpose.

In a way, Context is déjà vu all over again. The original version of these judge analytics was launched by Ravel Law in 2015. After LexisNexis acquired Ravel in June 2017, development pivoted to incorporating Ravel’s tools into the Lexis Advance legal research platform. The first stage of that incorporation came last June, when Lexis Advance integrated Ravel’s case law visualization tools as a product called Ravel View. Today’s launch of Context is the second major step in that integration.

That said, Context is a more powerful analytics tool than the original product, Ravel’s cofounders Nik Reed and Daniel Lewis told me during a call this week. Both now work for Lexis, where Reed is senior director of product and strategy and Lewis is general manager. Both say Context has been made stronger by the far deeper pool of data and more advanced technology available through LexisNexis.

Today’s launch, with judge analytics and expert witness analytics, is the first phase of Context. Future releases will add court analytics, company analytics, and lawyer and law firm analytics.

Judge Analytics

What makes this product unique among litigation analytics tools is that it analyzes the language of court documents. Other litigation analytics products, such as Lex Machina, which is also a LexisNexis product, or Westlaw Edge, are based on analysis of court dockets. Those products can tell you information such as how long a particular type of case is likely to last, how a judge is likely to rule on a particular type of issue, or how other lawyers have fared before a particular judge. Such information is derived from the docket.

Motion outcomes for U.S. District Judge William Alsup.

By contrast, Context analyzes the text of court documents to find language and citations that could prove persuasive to a particular judge. Specifically, it tells you how a judge has ruled on 100 different types of motions and the judges, cases and text the judge most commonly relied on in making those rulings.

Say you are filing a motion for summary judgment. Using Context, you could look up the judge and determine the rate at which that judge grants or denies summary judgment. You could see all of the specific cases in which the judge made these rulings. Then, going deeper, you can see the opinions that the judge most frequently cites in summary judgment cases, and even the specific text from those opinions that the judge most frequently relies on.
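To make that concrete, here is a minimal sketch of the kind of tallying such analytics imply. This is not Context’s actual method; the ruling records, field names and citations below are invented for illustration.

```python
from collections import Counter

# Hypothetical ruling records for one judge. Context works from the full text
# of the judge's opinions; this only illustrates the underlying counting.
rulings = [
    {"motion": "summary judgment", "outcome": "granted",
     "citations": ["Celotex Corp. v. Catrett", "Anderson v. Liberty Lobby"]},
    {"motion": "summary judgment", "outcome": "denied",
     "citations": ["Anderson v. Liberty Lobby"]},
    {"motion": "motion to dismiss", "outcome": "granted",
     "citations": ["Ashcroft v. Iqbal"]},
]

# Grant rate for a particular motion type.
sj_rulings = [r for r in rulings if r["motion"] == "summary judgment"]
grant_rate = sum(r["outcome"] == "granted" for r in sj_rulings) / len(sj_rulings)

# Opinions the judge cites most often in those rulings.
most_cited = Counter(c for r in sj_rulings for c in r["citations"])

print(f"Summary judgment grant rate: {grant_rate:.0%}")
print("Most-cited opinions:", most_cited.most_common(3))
```

Context goes beyond a simple count like this by also surfacing the specific text from those opinions that the judge most often relies on.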

Citation analytics show the cases and judges a judge finds persuasive.

With this information, you can tailor your memorandum to fit the judge. You can cite the judges, cases and even passages that you know the judge has relied on in the past and finds persuasive. That is a powerful tool.

Context’s judge analytics cover all federal judges, including appellate judges, and some, but not all, state court judges. Because appellate judges do not rule on litigation motions, motion analytics are not available for them (unless, I presume, they were previously a trial judge). However, citation analytics do work for appellate judges, so you can see, for opinions authored by a given judge, the cases and text that judge most commonly relies on.

There is no backward time limit to Context’s coverage. If a judge has been on the bench for decades, the entirety of the judge’s output is included in Context’s analytics.

Expert Witnesses

The expert witness analytics released today are new; they were not part of Ravel’s original set of analytics tools. The reason for that is simple: Ravel did not have data on expert witnesses, but LexisNexis has an extensive set of such data, covering more than 380,000 experts.

For each expert covered by Context, a user can see an overview that provides biographical and experiential information about the expert. For many experts, this includes not only the expert’s current CV, but also prior versions of the CV as it has been presented over the years. The overview also shows whether the expert is typically hired by plaintiffs or defendants, the number of cases per year the expert is engaged in, and the expert’s experience by jurisdiction and areas of law.

A deeper layer provides analytics on Daubert challenges to the expert. Nik Reed calls this a scorecard. For each expert, it shows the challenges by factor — methodology, qualification, relevance or procedural — and then the outcome. As you look at this scorecard, you can view each of the opinions in which the challenge was decided.
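As a rough illustration, the scorecard boils down to a cross-tabulation of challenge factor against outcome, something like the minimal sketch below. The factor categories are the ones named above; the records, field names and outcomes are invented.

```python
from collections import Counter

# Hypothetical Daubert challenge records for a single expert.
challenges = [
    {"factor": "methodology",   "outcome": "excluded"},
    {"factor": "methodology",   "outcome": "admitted"},
    {"factor": "qualification", "outcome": "admitted"},
    {"factor": "relevance",     "outcome": "admitted"},
]

# Scorecard: number of challenges by factor and outcome.
scorecard = Counter((c["factor"], c["outcome"]) for c in challenges)
for (factor, outcome), count in sorted(scorecard.items()):
    print(f"{factor:<13} {outcome:<9} {count}")
```

In Context, each entry in that breakdown links back to the opinions in which the challenge was decided.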

Reed says these analytics can be useful both when retaining an expert, to see how that expert has fared historically, and when challenging an opponent’s expert, to see which grounds have been successful in the past in excluding that expert.

These expert analytics cover only federal court challenges, but Reed said state court challenges will be added by the middle of 2019. Also in the works are attorney and firm connections, so that a user can see the specific attorneys and firms to which an expert has been connected.

There are other expert-witness analytics products on the market. Among the most comprehensive is Courtroom Insight, which has its own expert witness analytics and which also is integrated in the Fastcase AI Sandbox. It covers some 100,000 expert witnesses.

Future Development

Reed and Lewis said that the product being launched today is just the beginning. The next module to be added will be court analytics, which will break down the handling of specific types of motions by courts, rather than just specific judges within those courts. These will be similar in concept, but not scope, to the court analytics previously offered by Ravel, which I wrote about here.

After that will come a module providing analytics on companies as litigants, and then a module on lawyers and law firms that will show data such as success rates on particular types of motions and before specific courts or judges. Again, these will be similar to the law firm analytics previously offered by Ravel, which I wrote about here.

This latter module will be useful to researchers not just for litigation strategy, but also for performing competitive analysis and intelligence, Reed and Lewis said.

Pricing and Availability

LexisNexis is offering 30 days of free access to Context to any Lexis Advance subscriber who registers at www.lexisnexis.com/context. The free trial will run from Jan. 2 to Jan. 31, 2019. In addition, LexisNexis is providing free access starting today to all law school faculty, and to all law students who have a Lexis Advance ID starting Jan. 2.

Also starting today, access to Context will be provided to all current customers of Ravel Analytics through its legacy site.

Otherwise, for Lexis Advance customers, Context will be sold as an add-on to their subscription. Subscribers will be able to choose from among different modules. The two products released today will be sold as a single module. The modules will also be sold in packages oriented to either legal research or competitive intelligence.

LexisNexis declined to provide specifics on pricing.

“It’s been exciting for Nik and me personally to not just rebuild this, but to add new bells and whistles and to expand the functionality,” Lewis told me. “We have made it better in meaningful ways.”

A study released this week pitted two legal research platforms against each other, Casetext CARA and Lexis Advance from LexisNexis, and concluded that attorneys using Casetext CARA finished their research significantly more quickly and found more relevant cases than those who used Lexis Advance.

The study, The Real Impact of Using Artificial Intelligence in Legal Research, was commissioned by Casetext, which contracted with the National Legal Research Group to provide 20 experienced research attorneys to conduct three research exercises and report on their results. Casetext designed the methodology for the study in consultation with NLRG, and Casetext itself wrote the report of the results.

This proves, says Casetext, the efficacy of its approach to research, which — as I explained in this post last May — lets a researcher upload a pleading or legal document and then delivers results tailored to the facts and legal issues derived from the document.

“Artificial intelligence, and specifically the ability to harness the information in the litigation record and tailor the search experience accordingly, substantially improves the efficacy and efficiency of legal research,” Casetext says in the report.

But the LexisNexis vice president in charge of Lexis Advance, Jeff Pfeifer, took issue with the study, saying that he has significant concerns with the methodology and sponsored nature of the project. More on his response below.

The Study’s Findings

The study specifically concluded:

  • Attorneys using Casetext CARA finished the three research projects on average 24.5 percent faster than attorneys using traditional legal research. Over a year, that faster pace of research would save the average attorney 132 to 210 hours, Casetext says. (A rough check of what those figures imply follows this list.)
  • Attorneys using Casetext CARA found that their results were on average 21 percent more relevant than those found doing traditional legal research. This was true across relevance markers: legal relevance, factual relevance, similar parties, jurisdiction, and procedural posture.
  • Attorneys using CARA needed to run 1.5 searches, on average, to complete a research task, while those using LexisNexis needed to run an average of 6.55 searches.
  • Nine of the 20 attorneys believed they would have missed important or critical precedents if they had done only traditional research without also using Casetext CARA.
  • Fifteen of the attorneys preferred their research experience on Casetext over LexisNexis, even though it was their first experience using Casetext.
  • Every attorney said that, if they were to use another research system as their primary research tool, they would find it helpful to also have access to Casetext.
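Those hour figures imply a sizable baseline of research time. Here is a back-of-the-envelope check, assuming, since the report does not spell out the calculation, that the savings were computed as 24.5 percent of an attorney’s total annual research hours:

```python
# Back-of-the-envelope check of Casetext's claimed annual savings, assuming
# (the report does not say) that they equal 24.5% of total research hours.
savings_rate = 0.245
for saved_hours in (132, 210):
    implied_baseline = saved_hours / savings_rate  # implied research hours per year
    print(f"{saved_hours} hours saved implies roughly {implied_baseline:.0f} research hours per year")
```

On that assumption, the figures correspond to an attorney spending roughly 540 to 860 hours a year on legal research.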

Study Methodology

The attorneys who performed the research are all experienced in legal research and have on average 25.3 years in the legal profession, the report says. They were each given 20 minutes of training on using Casetext CARA. They were given a brief introduction to LexisNexis, but their familiarity with that platform “was presumed.”

Cover page of the Casetext study.

The attorneys were given three research exercises, in copyright, employment and insurance, and told to find 10 relevant cases for each. They were randomly assigned to complete two exercises using one platform and one using the other, so that roughly the same number of exercises were performed on each platform.

With each exercise, they were given litigation documents from real cases (complaints or briefs) and were asked to review and familiarize themselves with those materials. They were then given specific research tasks, such as “find ten cases that help address the application of the efficient proximate cause rule discussed in the memorandum in support of the motion for summary judgment.”

When researchers used CARA, they were able to upload the litigation materials. The study says that some researchers using Casetext were given sample search terms, but that most formulated their own search terms.

The researchers were told to track how long it took to perform each research assignment and how relevant they believed each case result to be, and to download their research histories. They were then asked a series of survey questions about their overall impressions of their research experiences.

Casetext then compiled all the information and prepared the report.

Lexis Raises Concerns

Pfeifer, who as chief product officer, North America, oversees Lexis Advance, expressed concern that the survey report failed to fully disclose the relationship between Casetext and NLRG. LexisNexis provided me with the following quotation from John Buckley, president of NLRG:

Our participation in the study primarily involved providing attorneys as participants in a study that was initially designed by Casetext. We did not compile the results or prepare the report on the study—that was done by Casetext.

Pfeifer also raised concerns about the study methodology. “The methods used are far removed from those employed by an independent lab study,” he said. “In the survey in question, Casetext directly framed the research approach and methodology, including hand-picking the litigation materials the participants were to use.”

Finally, Pfeifer noted that participants were trained on Casetext prior to the exercise, but not on Lexis Advance. “With only a brief introduction to Lexis Advance, it was presumed that all participants already had a basic familiarity with Lexis Advance and all of its AI-enabled search features.”

“From the limited information presented in the paper, the actual search methods used by study participants do not appear to be in line with user activity on Lexis Advance,” Pfeifer said. “References to ‘Boolean’ search is not representative of results generated by machine learning-infused search on Lexis Advance.”

Casetext Responds

During a phone call yesterday, Casetext CEO Jake Heller and Chief Legal Research Officer Pablo Arredondo defended the study.

“We think this is pretty darn neutral,” Heller said. “We gave it to them [NLRG] and they ran with it.”

Heller said that Casetext and NLRG worked collaboratively to design the methodology and that NLRG gave a lot of feedback in setting up the study.

I asked them why the study singled out LexisNexis for comparison and did not include other legal research services, and particularly why they did not include the new Westlaw Edge. They said that many legal professionals view Westlaw and LexisNexis as interchangeable, and that their goal was to demonstrate how Casetext stacked up against this traditional research duopoly.

Bottom Line

When a tobacco company funds research on the health effects of cigarette smoking, it doesn’t matter what the research finds. No matter how it turns out, the study is tainted by the source of the dollars that paid for it.

That’s my problem with this study. I’m a fan of Casetext’s CARA. When I tested it for myself in May, I was impressed, writing:

In my initial testing, the addition of CARA’s AI to a query makes a powerful combination, delivering results that much more closely matched my facts and issues.

But this study is tainted by Casetext’s funding of it and control over it — down to providing the research issues and materials and even suggesting search terms. That does not mean it is wrong. It just means there is a big question mark hovering over it.

But here, Casetext’s Arredondo gets the final word. Because when I raised that issue, here is what he said: “The best study would be for an attorney to sign up for a free trial of CARA and see for themselves.”

So if, like me, you’re a skeptic by nature, give it a try.

[Side note: Coincidentally, my new LawNext podcast recently featured episodes with both Heller and Arredondo of Casetext and Pfeifer of LexisNexis. Give them a listen to hear more about their work.]

Ever since LexisNexis acquired the legal research startup Ravel last June, its plan has been to integrate Ravel’s caselaw visualization technology and data analytics into Lexis Advance. Earlier this year, I published a preview of the integration of the visualization technology. Today, LexisNexis is formally launching that integration and beginning to roll it out to customers.

The name given to this new tool within Lexis Advance is Ravel View. It looks and functions very much like Ravel did as a standalone platform, but with one significant difference — the Ravel visualizations now include Shepard’s citation information.

Ravel’s concept all along has been to display search results visually, along a cluster map that shows the relationships among cases and their relative importance to each other. This visual depiction provides researchers with a quicker understanding of the overall landscape of relevant cases and also helps identify the cases that are most important.

In the standard search-results view, click the icon in the upper right corner to switch to Ravel View.

Now in Lexis Advance, when a user conducts a query, the default results page will remain the traditional list of relevant cases. But the user will be able to click an icon in the upper right of the screen to toggle the visual view, which displays the cluster map on the left side of the screen and the list of cases on the right.

Ravel View shows search results visually, with cases represented by circles on a cluster map.

Ravel View maps the top 75 cases relevant to the user’s search. Each case is represented as a circle, with lines between circles showing the citations between cases. This visualization shows:

  • Citation frequency. The bigger the circle, the more frequently that case has been cited by other cases, a measure of its importance.
  • Chronology. Ravel View maps cases across time, revealing trends and patterns in the development of precedent.
  • Jurisdiction. The vertical axis shows the Supreme Court at the top, followed by federal and state courts below. This shows the governing relationships among cases based on their court hierarchy.
  • Relevance. The higher a circle appears within each jurisdiction band, the more relevant the case is to the search.

When a user clicks on any circle, Ravel View displays the case name and citation relationships, and elevates the case to the top of the search results in the right panel so users can read the full description.

Hover over a line connecting two cases to show the Shepard’s treatment.

The incorporation of Shepard’s comes by way of the lines connecting each case. The lines are colored green, yellow or red to correlate to Shepard’s signal colors for positive and negative treatments. By hovering over a line, the user can display the language from the citing case that illustrates why Shepard’s assigned that treatment.
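For a rough sense of how such a map comes together, here is a minimal matplotlib sketch that encodes the attributes described above: circle size for citation frequency, the horizontal axis for chronology, a vertical band for jurisdiction plus relevance, and colored connecting lines for treatment. The cases, counts and colors are invented, and Ravel View’s actual layout and relevance scoring are certainly more sophisticated.

```python
import matplotlib.pyplot as plt

# Invented cases: (name, year, times cited, jurisdiction band, relevance 0-1).
# Band 2 = Supreme Court, 1 = federal courts, 0 = state courts.
cases = [
    ("Case A", 1998, 220, 2, 0.9),
    ("Case B", 2006,  80, 1, 0.7),
    ("Case C", 2013,  35, 0, 0.5),
]
# Invented citation edges, colored like Shepard's treatment signals.
edges = [("Case B", "Case A", "green"), ("Case C", "Case A", "red")]

pos = {name: (year, band + rel) for name, year, _, band, rel in cases}

fig, ax = plt.subplots()
for src, dst, color in edges:                 # citation lines, colored by treatment
    (x1, y1), (x2, y2) = pos[src], pos[dst]
    ax.plot([x1, x2], [y1, y2], color=color, zorder=1)
for name, year, cited, band, rel in cases:    # circle size tracks citation frequency
    ax.scatter(year, band + rel, s=cited * 3, zorder=2)
    ax.annotate(name, (year, band + rel))
ax.set_xlabel("Year")
ax.set_ylabel("Jurisdiction band + relevance")
plt.show()
```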

Ravel View will become available to every Lexis Advance subscriber on a phased-in basis over the next couple of weeks. By mid-July, it should be available to everyone.

Ravel CEO Daniel Lewis, who conceived the visual legal research platform while a second-year student at Stanford Law School, told me earlier this week that he is particularly excited about the integration of the Shepard’s citator information into Ravel’s visualizations.

“The highlight is the cool combination of taking the technology we had and adding it to the content and expertise that Lexis has to create this mashup,” he said.

Still to come is the integration of Ravel’s analytics into Lexis Advance. Ravel’s suite of analytics included court, judge and case analytics. The first of those integrations will come out over the next couple of months, he said, in the form of a new product on the Lexis platform. That effort is being led by Ravel cofounder Nik Reed.

For now, Ravel continues to operate as a standalone platform. But once the integration is complete, Ravel’s customers will be transitioned to Lexis, Lewis said. “The things you liked in Ravel you will be able to do better in Lexis,” he said.

Daniel Lewis was just in his second year at Stanford Law School when he had an idea for a different way to do legal research, as I recounted in this 2014 ABA Journal article. His idea was to display search results visually, along a cluster map that shows the relationships among cases and their relative importance to each other. Shortly after he graduated in 2012, he and classmate Nicholas Reed launched the legal research platform derived from his idea, Ravel Law. Last June, five years after its founding, Ravel was acquired by legal research giant LexisNexis.

By that time, Ravel had also developed a suite of analytics that included court analytics, judge analytics and case analytics. At the time of the acquisition, Jeff Pfeifer, VP of product management for LexisNexis, told me that the acquisition — which followed the acquisition of another legal analytics company, Lex Machina — was part of the company’s broader vision “to create the data-driven lawyer of the future.”

From the outset, the plan was to integrate Ravel’s data visualization technology and data analytics into Lexis Advance and other Lexis products, and to bring those integrations to market starting within the first quarter of 2018.

They are, it seems, right on schedule. At Legalweek in New York this week, I met with Pfeifer and Lewis and saw a preview of the integration of Ravel’s visualization technology into Lexis Advance. The integration is scheduled to be released early in March, said Pfeifer, who reaffirmed the company’s commitment to enabling “data-driven law.”

Preview of Integration

The two images that follow are a preview of this integration.

In the first image, you see what will be the default view after a user conducts a search. This looks much like it would look today, but with one notable change. The circle to the right of each result is what Pfeifer jokingly called the “Shepard’s donut.” It uses the colors of Shepard’s signal indicators to give the user a quick visual overview of how the case has been treated in subsequent citations.

By clicking View Mode in the upper right corner of the screen, the user can switch over to the search visualization mode based on the Ravel integration. This will look familiar to anyone who has ever seen Ravel. It uses the same cluster map of larger and smaller bubbles showing connections among cases and relative importance of cases, all arranged along a timeline.

One notable addition to the visualization is Shepard’s citation data. Now, the lines connecting cases include a colored dot, with the dot reflecting the Shepard’s signal indicator. Click on the dot to bring up a selection of text from the citing case that shows the basis for the Shepard’s treatment.

Analytics on Experts

As I said, the visualization will become available in Lexis Advance in March. In a second phase, scheduled for May, Ravel’s analytics tools will be incorporated into Lexis Advance. This will allow Ravel’s court, judge and case analytics to be used within Advance, and will extend the reach of those analytics to a broader selection of state, as well as federal, trial courts.

The May release will also use Ravel’s analytics to provide a greater depth of information about expert witnesses. A current product, LexisNexis Litigation Profile Suite, will be replaced by an updated product with a new name, as yet to be decided, that marries Ravel’s analytics with the existing profiles to provide more information on experts, such as how often they have been challenged, how often they testify, and more. The new Profile Suite product will also have more in-depth analytics on parties, judges and neutrals.

(Profile Suite will continue to be available for current customers who prefer not to move to the new product, Pfeifer said.)

Harvard Case Data

Before its acquisition by LexisNexis, Ravel had embarked on a project with Harvard Law School to digitize all U.S. case law. As I reported at the time of the acquisition, both Harvard and LexisNexis committed to completing that project, carried out under the auspices of the Harvard Library Innovation Lab.

The scanning of all those cases wrapped up nearly a year ago, but the final clean-up and digitization was just completed, Pfeifer and Lewis told me. Those cases have now been added to the Lexis Advance database. The total collection from Harvard was 5-7 million documents, and a “few hundred thousand” of them were cases not previously included in Lexis, Pfeifer said. That brought the number of case documents in Lexis Advance from 13.5 million to nearly 14 million.

In addition, later this year, Lexis Advance will be adding PDFs of all the cases from the Harvard collection. These include cases from before the American Revolution up to 2016.

Part of the agreement between Ravel and Harvard was that access to these cases would remain free to everyone. After the acquisition, LexisNexis and Harvard confirmed that commitment. Pfeifer and Lewis said this week that the Ravel website will be maintained as the primary site for the public to access those cases.

 

When LexisNexis acquired the legal analytics platform Lex Machina in November 2015, the plan was to integrate Lex Machina’s analytics into various LexisNexis products and, in particular, its Lexis Advance legal research platform. Last January, it took the first step in that direction when it integrated judge analytics into Lexis Advance, and later in the year it added integration for law firm analytics.

Yesterday LexisNexis rolled out the third such integration, attorney analytics. Now, when Lexis Advance users are viewing full-text cases, they can click on the names of the attorneys involved in the case and view summary charts showing data about the attorney, such as the attorney’s case-filing history.

This works for the practice areas currently covered by Lex Machina: patent, copyright, trademark, antitrust, securities, employment, commercial, product liability and federal bankruptcy appeals.

From that summary page, Lexis Advance users who also have a subscription to Lex Machina can drill further into the full Lex Machina set of analytics.

(Note that attorneys’ names have been blotted out from these images.)