It was eight years ago that Thomson Reuters unveiled WestlawNext to great fanfare, eventually phasing out classic Westlaw entirely. A lot has changed in those eight years, both in the legal market and in legal technology.

Today, Thomson Reuters is officially announcing its next-generation legal research platform, Westlaw Edge, which uses advanced artificial intelligence and analytics with the goal of helping legal professionals find answers and perform research more efficiently and with better results.

The most striking features of Westlaw Edge are:

  • An enhanced, AI-powered version of the KeyCite citator that provides warnings that cases may no longer be good law in circumstances that traditional citators could not identify.
  • WestSearch Plus, an AI-driven legal research tool that guides lawyers quickly to answers to specific legal questions.
  • Integrated litigation analytics, providing detailed docket analytics covering judges, courts, attorneys and law firms, for both federal and state courts.
  • Statutes Compare, a tool that allows researchers to compare changes to statutes.

Westlaw Edge also includes a variety of user-experience improvements throughout the platform.

This post was a LitigationWorld Pick of the Week.

Westlaw Edge will be offered as a subscription upgrade to Westlaw subscribers. Thomson Reuters will continue to operate Westlaw in its current form until 2024, but it will focus most major development going forward on the new Westlaw Edge platform.

Bloggers and journalists attended a briefing yesterday at Thomson Reuters headquarters in New York where company executives presented an overview of Westlaw Edge. I’ve been given a password to test it, and I will write more when I get a chance to try it hands-on.

Here are key points from yesterday’s briefing.

Enhanced Citator

Westlaw Edge’s enhanced citator overcomes a shortcoming inherent in most citators, according to Mike Dahn, senior vice president, Product Management. Traditional citators can identify that a case overrules or invalidates a prior case only when there is a direct citation relationship between the two. As a result, a case may no longer be good law yet never be flagged by a citator.

This enhanced citator uses machine learning and natural language processing to identify cases that may no longer be good law, even when there is no direct citation relationship. It flags those cases with a new orange KeyCite warning. The warning does not necessarily mean the case has been overturned, but it tells the researcher to check into that possibility.

WestSearch Plus

Currently on Westlaw, as well as on other legal research services, when you enter a search, what you get back are documents. You then work your way through those documents in pursuit of the subject of your research.

But sometimes what you want is simply to enter a question and get an answer as quickly as possible. That is what WestSearch Plus strives to provide. Ask a question and it will take you not only to relevant documents, but also to answers to thousands of questions lawyers are likely to ask. Westlaw Edge uses AI tools such as machine learning and natural-language processing, as well as Westlaw’s own headnotes and Key Number System, to suggest questions as you type a query and provide answers to those questions.

“I do want to be clear: We have not built a robot lawyer,” Dahn said. This will not answer every question a lawyer will have. But what it will do, he said, is help lawyers get to answers much faster.

This feature will also be available in a new Westlaw Edge iPhone app the company is releasing.

Litigation Analytics

Litigation analytics have become increasingly popular in recent years, with a number of products on the market – most notably Lex Machina, which was acquired by LexisNexis and is now becoming a core part of its research platform.

With Westlaw Edge, Thomson Reuters appears to have hit the ground running in the litigation analytics competition. The offering appears to be a powerful set of analytics, covering more data and practice areas from both federal and state courts than any other product on the market.

It covers dockets for every federal case type except bankruptcy and includes some 8 million federal cases. It includes analytics for 13 types of federal motions – plus filters for subsets of those motion types – and for expert challenges.

Analytics are incorporated into research results. Previously in Westlaw, when you clicked on the name of the judge in an opinion, you would see biographical information. With Westlaw Edge, you still get that, but in addition you get analytics showing a wealth of information about how the judge handles different types of cases and motions, how the judge’s opinions stand up on appeal, how long the judge takes to handle matters, and more.

You can drill down through the visualizations to get to the underlying documents. You can apply searches and filters to see how a judge handles a certain type of motion in a certain type of case, and then see the actual ruling. Related litigation documents, such as the parties’ briefs and motions, are also available.

Among the ways these analytics can be used, according to Dahn:

  • Better understand a judge.
  • Better understand opposing counsel.
  • Evaluate venues for plaintiffs.
  • Better understand case and motion timelines.
  • Locate and evaluate local counsel.
  • Prepare client pitches, showing the strengths of your firm or the weaknesses of others.
  • Perform legal research faster.

State court analytics are not as robust as the federal analytics, in that they generally lack motion-level outcomes, although some state jurisdictions, including New York and Cook County, Illinois, do have them.

Statutes Compare

Statutes Compare is a tool for analyzing changes to statutes. The tool compares two versions of a statute, using red-lining to show changes. By default, the tool compares the current version of a statute to the prior version, but the user can select any two historical versions for comparison.

The tool can be used with all federal and state statutes. However, for a small number of states, only the most recent changes are available, not older historical versions.
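The red-lining Statutes Compare performs is essentially a text diff between two versions of a statute. As a rough illustration only (the statute text below is invented, and this sketch is not Thomson Reuters’ implementation), Python’s standard difflib shows the same idea, marking deleted lines with “-” and added lines with “+”:

```python
import difflib

# Two hypothetical versions of a statute section (illustrative text only).
prior = [
    "A person commits an offense if the person operates a vehicle",
    "while intoxicated in a public place.",
]
current = [
    "A person commits an offense if the person operates a motor vehicle",
    "while intoxicated in a public place.",
]

# ndiff marks removed lines with "-" and added lines with "+",
# analogous to red-lined deletions and insertions in a statute comparison.
diff = list(difflib.ndiff(prior, current))
for line in diff:
    print(line)
```

Here the first line of the section changed (“vehicle” became “motor vehicle”), so it appears once with a “-” prefix and once with a “+” prefix, while the unchanged second line is passed through as-is.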

User Experience Improvements

Westlaw Edge introduces a variety of small improvements to the user experience. One, for example, allows you to restore filters used in a previous search to apply them again in a new search. Another lets you expand or contract the synopsis at the top of a case with one click. Still another lets you see notes you’ve added to documents without leaving the document.

Dahn said that a lot of work was put into refining how documents are displayed, including refinements to fonts, line spacing and the like. A new document-level table of contents on the left of the screen shows you where you are in a document as you move through it. If you are in the dissent of an appellate opinion, for example, the table of contents will show that.

As noted above, Thomson Reuters will offer Westlaw Edge as an upgrade to Westlaw and will continue to operate both platforms. The company declined to specify pricing, but said it would be “extremely reasonable.” It will be rolled out to law professors later this summer and to law students next spring.

Better AI?

A key point that was repeatedly emphasized during yesterday’s briefing is that AI isn’t easy — at least AI done well. It’s not clear whether this was meant as a jab at other products on the market, but Thomson Reuters says its AI is distinguished in three significant ways:

  • Editorial quality matters for AI performance, and TR has the largest team of attorney editors in the industry who concisely describe the essence of cases and add important terms for better recall and understanding.
  • The quality of the data also matters, and TR has data enhanced by the Key Number system and sophisticated citation mapping.
  • The team matters, and TR has a team of leading research scientists at its Center for AI and Cognitive Computing who developed the AI that powers Westlaw Edge.

Bottom Line

I’ll be testing out the new Westlaw Edge and will write more about my impressions and specific features. If the actual product lives up to the presentation I saw yesterday, then it is a major step forward, not only for Thomson Reuters, but for the industry.

At a time when any number of companies are offering legal research and intelligence tools using AI and analytics, Westlaw Edge appears to have the edge, offering features such as quick answers to legal questions, sophisticated analytics across all federal and state courts, better warnings about potentially bad case law, and more.

Westlaw and LexisNexis are typically viewed as the dominant leaders among legal research services. But a recent survey found that Fastcase is in a virtual dead heat with Westlaw and LexisNexis among smaller-firm lawyers.

The just-released information comes from a survey conducted in 2016 by law practice management company Clio. The survey asked Clio users what tool they use for legal research. Out of 2,162 respondents, their top-three answers were:

  1. Westlaw, 20.58 percent (445 respondents).
  2. Fastcase, 20.35 percent (440 respondents).
  3. LexisNexis, 20.21 percent (437 respondents).

As you can see from the numbers, this is a virtual tie, with only eight responses separating the top three.
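The reported percentages follow directly from the raw respondent counts; a quick sanity check (using only the figures reported above) confirms both the rounding and the eight-response spread:

```python
# Verify the survey percentages against the raw respondent counts.
total = 2162
counts = {"Westlaw": 445, "Fastcase": 440, "LexisNexis": 437}

for service, n in counts.items():
    pct = round(n / total * 100, 2)
    print(f"{service}: {pct}%")  # 20.58%, 20.35%, 20.21%

# The gap between first and third place is just 8 responses.
spread = counts["Westlaw"] - counts["LexisNexis"]
print("Spread:", spread)
```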

Next in order were Google Scholar, named by 13.6 percent of respondents, and Casemaker, named by 10.22 percent of respondents.

Two important caveats to note:

  1. Clio’s users are primarily solo, small and mid-sized firms. Larger firms are not represented in these numbers.
  2. Clio and Fastcase have an integration which may prompt more Clio users to use Fastcase.

“There’s no ‘big two’ in legal research anymore,” Fastcase CEO Ed Walters said this morning. “From now on, it’s the big three – and Fastcase is still growing.”

Following my post earlier this week about the benchmark report published by Blue Hill Research that assessed the ROSS Intelligence legal research platform, I had several questions about the report and many readers contacted me with questions of their own. The author of the report, David Houlihan, principal analyst at Blue Hill, kindly agreed to answer these questions.

The study assigned researchers to four groups, one using Boolean search on either Westlaw or LexisNexis, a second using natural language search on either Westlaw or LexisNexis, a third using ROSS and Boolean search, and a fourth using ROSS and natural language search. Why did none of the groups use ROSS alone?

Houlihan: Initially, we did plan to include a “ROSS alone” group, but cut it before starting the study. We did this for two primary reasons. One: the study was relatively modest and we wanted to keep our scope manageable. Focusing on one use case (ROSS combined with another tool) was one way to do that. Two: I don’t think an examination of “ROSS alone” is particularly valuable at this time. AI-enabled research tools are in early stages of technological maturity, adoption, and use. ROSS, for example, only provides options for specialized research areas (such as bankruptcy), which means assessing it as a replacement option for Westlaw or Lexis is premature. Instead, we focused our research on the use case with the currently viable value proposition. That said, I have no doubt that there will need to be examinations of the exclusive use of AI-enabled tools over time.

The report said that you used experienced legal researchers, but it also said that they had no experience in their assigned research platforms. How is it possible for an experienced legal researcher to have no experience in Westlaw or LexisNexis? Did you have Westlaw users assigned to Lexis, and vice versa?

Houlihan: You have it. Participants were not familiar with the particular platforms that they used. They were proficient in standard research methods and techniques, but we intentionally assigned them to unfamiliar tools. So, as you say, an experienced Westlaw user could be put on LexisNexis, but not Westlaw. The goal was to minimize any special advantage that a power user might have with a system and approximate the experiences of a new user. I think readers of the report should bear that in mind. I expect different results if you were to look at the performance of the tools with users with other levels of experience. That’s another area that deserves additional investigation.

The research questions modeled real-world issues in federal bankruptcy law, but you chose researchers with minimal experience in that area of law. Why did you choose researchers unfamiliar with bankruptcy law?

Houlihan: In part, for similar reasons that we assigned tools based on lack of familiarity. We were attempting to ascertain, as a baseline, the experiences of an established practitioner who was tackling these particular types of research problems for the first time.

Moreover, introducing participants with bankruptcy experience and knowledge adds some insidious challenges. You cannot know whether your participants’ existing knowledge is affecting the research process. You also need to figure out what experience level you do wish to use and how to ensure that all of your participants are operating at that level. Selecting participants that were unfamiliar with bankruptcy law eliminated those worries. Although, again, a comparison of the various tools at different levels of practitioner expertise would be a study I would like to see.

I would think that the bankruptcy libraries on ROSS, Westlaw and LexisNexis do not mirror each other. Given this, were researchers all working from the same data set or were they using whatever data was available on whatever platform?


Houlihan: Researchers were limited to searches of case law, but otherwise they were free to use the respective libraries of the tools as they found them. It strikes me as somewhat artificial to try to use identical data sets for a benchmark study like this. If we were conducting a pure technological bake-off of the search capabilities of the tools, I think that identical data sets would be the right choice. However, that’s not quite what Blue Hill is after. As a firm, we try to understand the potential business impact of a technology, based on what we can observe in real-world uses (or their close approximations). To get there, I would argue that you need to account for the inherent differences that users will encounter with the tools.

With regard to researchers’ confidence in their results, wouldn’t the use of multiple platforms always enhance confidence? In other words, if I get a result just using Lexis or get a result using both Lexis and ROSS, would the second situation provide more confidence in the result because of the confirmation of the results? And if so, would it matter if the second platform was ROSS or anything else?

Houlihan: I think that’s right, but we weren’t trying to show anything more profound. For the users of ROSS in combination with a traditional tool, we saw higher confidence and satisfaction than users of just one of those traditional tools with a great deal of consistency.

Whether it is always true that the use of two types of tools, such as Boolean and Natural Language, will yield the same response, I can’t say. We didn’t include that use case. As one of your readers rightfully pointed out, the omission is a limitation with the study. That is yet another area where more research is needed. I fear I am repeating myself too much, but the technology is new and the scope of what needs to be assessed is not trivial. It is certainly larger than what we could have hoped to cover with our one study.

For what it is worth: I wondered at the outset whether two tools would erode confidence. I still do. We tended to see fairly different sets of results returned from different tools. For example, there were a number of relevant cases that consistently appeared in the top results of one tool that did not appear as easily in another tool. To my mind, that undermines confidence, since it encourages me to ask what else I missed. That reaction was not shared by our participants, however.

With respect to the groups assigned to use ROSS and another tool, did you measure how much (or how) they used one or the other?

Houlihan: We did, but we opted to not report on it. The relative use of one tool or another varied between researchers. As a group, we did observe that participants tended to rely more on the alternative tool for the initial questions and to increase their reliance on ROSS over the course of the study. I believe we make a note about it in the report. However, we did not find that this was a sufficiently strong or significant trend to warrant any deeper commentary without more study.

(This question comes from a comment to the original post.) It appears that the Westlaw and Lexis results are combined in the “natural language” category. That causes me to wonder if one or the other exceeded ROSS in its results and they were combined to obscure that.

Houlihan: The reason we combined the tools was that we never intended to compare Westlaw v. ROSS or Lexis v. ROSS. We were interested in how our use case compared to traditional technology types used in legal research. We used both Lexis and Westlaw within each assessment group to try to get a merged view of the technology type that wasn’t overly colored by the idiosyncrasies that the particular design of a tool might bring. In fact, we debated whether to mention that Westlaw or LexisNexis tools were used in the study at all. Ultimately, we identified them as a sign that we were comparing our use case to commonly used versions of those technology types. As for how individual tools performed, all I feel we can say reliably is that we did not observe any significant variation in outcomes for different tools of the same type.

A huge thanks to David Houlihan for taking the time to answer these. The full report can be downloaded from the ROSS Intelligence website. 

Westlaw has had an iPad app since 2010. Surprisingly, however, it has never had its own iPhone app. iPhone users could access Westlaw through a mobile-optimized site, but there was no app.

Well, now there is. Last Wednesday, Thomson Reuters introduced a Westlaw app built specifically for use on the iPhone.

This is not an app for heavy-duty research on the road. The primary thrust of the app is current awareness. It is designed for users to keep up with Westlaw alerts and docket updates, to track company news, and to follow practice area developments.

That said, the app can be used to search Westlaw content and to access and save documents. Just as with Westlaw, you can select the jurisdictions to search and then enter your query. (Tip: Pick the jurisdictions before entering the query, because if you do it the other way around, the query disappears.)

Search results are displayed as a global overview across all content types — cases, statutes, regulations, Practical Law, etc. — and can be filtered by content type, so you can view just cases or just statutes. When you view a document, your search terms are highlighted in yellow.

Documents that you find using the app can be saved to your research folders, where they will be synchronized with and accessible from the desktop and iPad versions. You can also print and email documents from the app. (You will need an AirPrint-enabled printer.)

The app does not show full KeyCite treatment for cases. However, it does flag cases for which there is negative treatment and allows you to view the negative treatment.

For current awareness, the app lets you track companies and practice areas and receive Westlaw alerts. News stories and other updates can also be saved to your folders, emailed or printed.

Of course, you will need a Westlaw account to use the app. Once you install the app, you can sign in using your OnePass login.