At the annual meeting of the American Bankruptcy Institute on April 20, I moderated a plenary panel, “Artificial Intelligence: Why It Matters To Your Future Bankruptcy Practice.” In advance of the panel, the ABI recorded a series of video interviews I conducted with some of the panelists. Here is my interview with Thomas Hamilton, vice president of strategy and operations at ROSS Intelligence.

On Monday, I blogged about the launch by ROSS Intelligence, the AI-based legal research platform, of EVA, a free product that analyzes briefs and performs various functions, including determining whether the cases they cite are still good law.

After hearing about EVA, the folks at Casetext — who have their own brief analyzer, CARA — challenged ROSS to participate in a “robot fight” here at Legaltech/Legalweek in New York, where both companies are participating. The challenge was to engage in a head-to-head comparison of the two products, live in front of an audience.

ROSS declined to participate, but Casetext decided to stage it anyway, creating their own EVA account and running the same brief through both analyzers to see how the two platforms compared.

I broadcast the robot fight on Facebook Live, and you can watch it for yourself below. The principal speaker you’ll hear is Jake Heller, founder and CEO of Casetext.

[Update: If you’re having trouble viewing the embedded video, view it on Facebook. For a great recap of this face-off, see this post by Joe Patrice on Above the Law.]

The folks at ROSS Intelligence are sometimes secretive about their artificial intelligence-based legal research platform, limiting access to and even demonstrations of the platform to paying customers. The rest of us have been left to wonder what all the buzz is about, and we’ve been wondering all the more since October, when ROSS announced an $8.7 million Series A funding round, adding to its earlier $4.3 million seed round, and several notable hires, including Scott Sperling, formerly head of sales at WeWork, as vice president of sales, and Daniel Rodriguez, outgoing dean of Northwestern Pritzker School of Law, as an advisor to assist with the company’s law school expansion and access to justice initiatives.

That secretive state of affairs took a 180-degree turn today as ROSS unveiled a free service — available to any legal professional — that takes advantage of some of the same AI that powers ROSS’s commercial product. While this new product is by no means as full-featured or powerful as the commercial platform, it gives every lawyer the opportunity to get a taste of how AI can enhance and simplify legal research.

The new product is called EVA. In a nutshell, it is a brief analyzer. But it is more than that. Most notably, it is also a tool for checking the subsequent history of cited cases and determining whether they are still good law, in the vein of LexisNexis’s Shepard’s and Thomson Reuters’ KeyCite. It can also be used to find cases similar to a given case, cases with similar language, or cases containing the same quotes.

ROSS says EVA will supercharge a lawyer’s research. As of this writing, I have seen a demonstration of EVA, but have not yet used it. However, based on the demonstration alone, and on the simple fact that EVA is free to use, I cannot imagine ever again filing a brief or reviewing an opponent’s brief without running it through EVA.

EVA is not the first brief analyzer on the market. That honor goes to Casetext’s CARA, which the American Association of Law Libraries selected as its 2017 new product of the year. Judicata also has a powerful brief analyzer, which it calls Clerk and which I reviewed in October. Each of these differs in what it can do. In addition, CARA requires a paid subscription and Clerk so far works only for California cases.

Analyze A Brief

Over the weekend, Andrew Arruda, cofounder and CEO of ROSS, showed me how EVA works.

Drag and drop a brief to analyze it.

To analyze a brief using EVA, drag and drop it onto the EVA website. The first time you use EVA, you will be required to register. However, as I already said, registration is free. Use it to check a brief you’ve written or one you’ve received from an opponent. Once you upload a brief, you can perform three general functions:

  • Check if the cases cited in the brief are still good law.
  • View the cited cases on ROSS via hyperlinks that EVA adds to the brief.
  • Find cases having similar language to the brief.

Within seconds of uploading the brief, EVA generates an analysis of all the cases, creating a list of the cases and giving each case a label saying whether it is still valid or has been overruled, criticized, questioned or superseded by a subsequent case. In this way, you can quickly see which cases within the brief have negative subsequent treatments.

EVA generates a list of cases with negative treatments.
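ROSS hasn’t said how EVA does any of this under the hood. But to make the idea concrete, here is a toy sketch, in Python, of the first step any brief analyzer must perform — pulling citations out of a document — paired with an invented, hard-coded stand-in for a citator database. None of this is ROSS’s actual code or data:

```python
import re

# Toy stand-in for a citator database; these treatment labels are sample data.
TREATMENTS = {
    "550 U.S. 544": "valid",       # Bell Atlantic Corp. v. Twombly
    "355 U.S. 41": "superseded",   # Conley v. Gibson
}

# Simplified pattern for U.S. Supreme Court and federal reporter citations.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.2d|F\.3d|F\. Supp\. 2d)\s+\d{1,4}\b")

def check_brief(text: str) -> dict[str, str]:
    """Return every citation found in the brief with its treatment label."""
    return {cite: TREATMENTS.get(cite, "unknown") for cite in CITATION_RE.findall(text)}

brief = ("Under Bell Atlantic Corp. v. Twombly, 550 U.S. 544 (2007), which "
         "retired the pleading standard of Conley v. Gibson, 355 U.S. 41 (1957), ...")
print(check_brief(brief))  # {'550 U.S. 544': 'valid', '355 U.S. 41': 'superseded'}
```

A real system needs far more robust citation parsing and, of course, a live citator behind the lookup, but the flow — extract, look up, label — is the same.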

Click on any case in that list to be taken to the full-text case on ROSS. From the full version of the case, you can click tabs to see all positive and negative treatments of the case. You can continue to click through the cases listed as positive or negative treatments to go to their full text.

Whenever you are viewing a case, you can print it or save it to a folder. In addition, with one click, you can generate a copy-and-paste version of the case’s Bluebook citation.

Find Similar Language

In addition to viewing the list of subsequent cases on EVA, you can also view the uploaded brief. EVA adds hyperlinks to all the case citations in the brief, so you can easily click through from the brief to the full text.

Find language similar to text in a brief or case.

As you read through the brief on EVA, you may come across a passage for which you would like to find other supporting cases or see what other cases say about that issue. To do this, highlight the language in the brief and click the option “Find Similar Language.” EVA generates a list of cases with similar language, showing the case name and the relevant snippet of text. Click on any result to go to the full case, and you are taken directly to the point in the case where the matching language is found.
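ROSS hasn’t disclosed how its similarity matching works, but a common approach to this kind of feature is to represent passages as vectors and rank them by cosine similarity. A minimal sketch, using TF-IDF vectors and invented sample passages:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented snippets standing in for passages drawn from a case-law corpus.
passages = [
    "A complaint must state a claim to relief that is plausible on its face.",
    "The movant bears the burden of showing the absence of a genuine dispute.",
    "Leave to amend should be freely given when justice so requires.",
]
highlighted = "To survive dismissal, a complaint must plead a facially plausible claim."

vectorizer = TfidfVectorizer(stop_words="english")
corpus_vectors = vectorizer.fit_transform(passages)
query_vector = vectorizer.transform([highlighted])

# Rank the corpus passages by cosine similarity to the highlighted language.
scores = cosine_similarity(query_vector, corpus_vectors)[0]
for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.2f}  {passage}")
```

Commercial systems presumably use far richer representations than TF-IDF, but the ranking idea is the same.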

Now that you’ve found this new case with similar language, you may want to know whether the full case is relevant to your issue and therefore worth reading. EVA lets you create a summary overview of the case that is targeted to your specific research query.

To create this overview, select the Generate Overview option, type your issue in plain English, and then click the button to process it. EVA generates a summary based on key text drawn from the opinion that shows you an overview of the opinion’s discussion and any holding regarding your issue. You can copy and paste this overview as a summary of the case if you wish. Users can rate these overviews with thumbs up or thumbs down to help further train the accuracy of the results.

A case overview generated by EVA.
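Again, ROSS hasn’t said how these overviews are built, but a crude approximation of a query-targeted summary is to score each sentence of the opinion against the stated issue and stitch the top scorers together in their original order. A hypothetical sketch:

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def generate_overview(opinion: str, issue: str, k: int = 3) -> str:
    """Return the k opinion sentences most similar to the stated issue,
    stitched together in their original order."""
    sentences = re.split(r"(?<=[.?!])\s+", opinion)
    vectorizer = TfidfVectorizer(stop_words="english")
    sentence_vectors = vectorizer.fit_transform(sentences)
    issue_vector = vectorizer.transform([issue])
    scores = cosine_similarity(issue_vector, sentence_vectors)[0]
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return " ".join(sentences[i] for i in top)
```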

The Similar Language function can also be used to find exact quotes in other cases. Say, for example, your brief contains a quotation from a cited case and you would like to find other cases that use that same quotation. Highlight the quotation, select Find Similar Language, and EVA will find other cases that have the exact quote and others that have substantially similar language.

Read and Check Cases

Although EVA is primarily a brief analyzer, you can use it without uploading a brief to read cases and check their subsequent history. EVA includes a search bar where you can enter the name of a case and bring up its full text. Once you do, you can see all the case’s positive and negative treatment and, if you wish, save the case to a folder or print it.

EVA’s coverage of cases includes all U.S. federal and state courts, Arruda said. While the commercial ROSS platform so far works only for limited practice areas, EVA works across all practice areas.

“With EVA, we want to provide a small taste of ROSS in a practical application, which is why we’re releasing it completely free,” Arruda said. “If we’re releasing all this for free, you can imagine what we have in our paid platform.”

“We’re deploying a completely new way of doing research with AI at its core,” he said. “It would be silly to do research or file a brief without using this product. And, because it is based on machine learning, it gets smarter every day.”

I’ll repeat that I have not used EVA, but only seen Arruda’s demonstration of it. But I’m impressed with what I’ve seen. I can’t imagine why a lawyer wouldn’t use EVA to analyze a brief, since it costs nothing to use and could potentially uncover weak citations or additional authorities.

ROSS founders Pargles Dall’Oglio, Andrew Arruda and Jimoh Ovbiagele.

The artificial-intelligence-driven legal research service ROSS Intelligence today announced an $8.7 million Series A funding round, adding to its earlier $4.3 million seed round.

The two-year-old company will use the funding to accelerate its growth, expand its product lines, increase its capacity, and attract world-class talent to its workforce, CEO Andrew Arruda told me.

This latest round was led by iNovia Capital with participation by Comcast Ventures Catalyst Fund, Y Combinator Continuity Fund, Real Ventures, Dentons’ NextLaw Labs and Apple’s deep learning lead, Nicolas Pinto.

ROSS also announced the hiring of Scott Sperling, formerly head of sales at WeWork, as vice president of sales. Sperling will lead the expansion of ROSS’s sales efforts across the United States.

In addition, Daniel Rodriguez, outgoing dean of Northwestern Pritzker School of Law, is joining ROSS as an advisor to assist with the company’s law school expansion and access to justice initiatives.

The company was founded in 2015 by Arruda, Jimoh Ovbiagele and Pargles Dall’Oglio at the University of Toronto.

“I’m proud of the talent we’re bringing on board,” Arruda said. “We’re attracting folks who’ve been at large organizations and have experience building up companies and taking them public.”

Arruda said ROSS will expand its legal research service into other practice areas, with labor and employment coming next, and also launch new product lines outside legal research.

“We started with search but we’ll be building an ecosystem around that, just like Google did,” Arruda said.

In a press release, Karam Nijjar, partner at iNovia Capital, praised ROSS as the market leader in the development of AI applications for the law.

“iNovia’s participation from the very earliest days of ROSS Intelligence, when the founders worked out of a basement of Toronto, Canada, has allowed us to help shape and guide the company as it’s grown from success to success,” Nijjar said. “We’re thrilled to be leading this Series A financing round to provide the ROSS Intelligence team with the firepower to rapidly scale their technical and sales expertise while expanding into the Fortune 500 legal market and beyond.”

 

LexisNexis is today announcing the launch of Lexis Answers, a feature that brings artificial intelligence to the Lexis Advance legal research platform. With Lexis Answers, a researcher can ask a natural-language question and get back the single-best answer in the form of what Lexis is calling a Lexis Answer Card.

LexisNexis says that Lexis Answers uses powerful machine learning, cognitive computing and advanced natural language processing technologies to deliver the single best and most authoritative answer, in addition to comprehensive but more precise search results.

“Lexis Answers is designed to help a lawyer get more complete information from a query by parsing the query to understand its intent and then delivering a precise answer to the question that’s been asked,” Jeff Pfeifer, vice president, product management, told me on Friday.

The answer is delivered in the form of an Answer Card, which both provides the answer and links to the specific text within the document that is the source of the answer. In addition, Lexis Answers suggests related topics and concepts to help the researcher expand the search.
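Lexis hasn’t published a schema for Answer Cards, but based on Pfeifer’s description you could imagine their contents shaped something like this hypothetical sketch (all fields and values invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class AnswerCard:
    question: str
    answer: str
    source_document: str            # the document supplying the answer
    source_span: tuple[int, int]    # character offsets of the supporting text
    related_topics: list[str] = field(default_factory=list)  # "see also" items

card = AnswerCard(
    question="What is the burden of proof for fraud in New York?",
    answer="(the extracted answer text would go here)",
    source_document="(citation of the source case)",
    source_span=(1042, 1103),
    related_topics=["justifiable reliance", "scienter"],
)
```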

Lexis Answers is available to all Lexis Advance subscribers at no extra cost. Users need do nothing to activate it — if the user enters a query in Lexis Advance that is suitable for Lexis Answers, the Answer Card will appear as a query result.

Parsing the User’s Intent

With today’s launch, Lexis Answers works only with questions that fall into one of five common categories: standards of review, burdens of proof, elements of claims, standard legal definitions, and core legal doctrines.

For example, the question, “What is the burden of proof for fraud in New York?” would produce an Answer Card with the specific answer, as well as standard search results and suggestions of related topics.

“Previously we would have run that query as a natural language or Boolean search,” said Pfeifer. “Now we parse the language of the query to identify the user’s intent so we can provide a specific answer.”

Users are not required to enter fully formed questions, Pfeifer said. Rather, the machine learning application parses the query and dynamically mines the underlying data set for the answer. Answers are not pre-processed and stored, but generated in real time.

“Because the user’s query is linguistically dissected as opposed to term-matched, we can present a better answer as well as related terms and concepts,” Pfeifer said. “Instead of dissecting a query, we’re understanding linguistically the intent of the query.”

The machine learning that underlies Lexis Answers has been trained using content from case law and legal dictionaries. Over time, it will be expanded to include additional content.
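Pfeifer didn’t describe the underlying models, but the five supported categories suggest a familiar first step: classifying a query’s intent before mining the data set for an answer. A toy sketch of such a classifier, with invented training examples standing in for Lexis’s far larger training data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples, one per supported question category.
queries = [
    "what is the standard of review for summary judgment",
    "who bears the burden of proof for fraud",
    "what are the elements of a negligence claim",
    "define adverse possession",
    "explain the doctrine of res judicata",
]
intents = ["standard_of_review", "burden_of_proof", "elements_of_claim",
           "legal_definition", "legal_doctrine"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(queries, intents)

# Route a new query to the answer-mining step appropriate to its intent.
print(model.predict(["What is the burden of proof for fraud in New York?"]))
```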

Related Concepts

For Lexis Answers, Lexis has constructed a Knowledge Graph – a graph of relationships and associated concepts – that helps it present recommendations of related legal concepts. (This is the “see also” section in the image at the top of this post.) In the future, the graph will display related entities, documents and other material.

“The knowledge graph grows and relationships are created over time as additional content is processed by our machine learning algorithms,” Pfeifer said.
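To make the idea concrete, here is a minimal, hypothetical sketch of how a graph of concepts can drive “see also” suggestions — the nodes and edges are invented, not Lexis’s actual graph:

```python
import networkx as nx

# Invented nodes and edges -- not Lexis's actual knowledge graph.
graph = nx.Graph()
graph.add_edge("fraud", "burden of proof")
graph.add_edge("fraud", "justifiable reliance")
graph.add_edge("burden of proof", "preponderance of the evidence")
graph.add_edge("burden of proof", "clear and convincing evidence")

def related_concepts(concept: str) -> list[str]:
    """Concepts one hop away -- the kind of list a 'see also' section shows."""
    return sorted(graph.neighbors(concept))

print(related_concepts("fraud"))  # ['burden of proof', 'justifiable reliance']
```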

I have not yet seen or used Lexis Answers, but as Pfeifer described it to me, it sounded similar to ROSS Intelligence, which has garnered much attention in the past year for its AI-powered legal research. Both Lexis Answers and ROSS use AI to help researchers find the “best” answer based on natural language queries. (I also have not used ROSS, although I’ve asked the company to provide me with access or a demo.) Pfeifer also hasn’t seen ROSS, but he agreed that the two research tools may be similar in that they are both based on machine learning technologies that try to map the intent of the query against a set of trained data.

More AI to Come

Lexis Answers was developed over the last 18 months at LexisNexis’s Raleigh Technology Center, where a team of data scientists, computational linguists, advanced engineering and product management professionals is developing various AI products for lawyers. LexisNexis says that it will be releasing additional AI products throughout this year and beyond.

Future development of Lexis Answers will also benefit from LexisNexis’s recent acquisition of Ravel Law, Pfeifer said. Already, the LexisNexis Raleigh team and Ravel’s development team have started working together and plans are underway to add new question types to Lexis Answers based on Ravel’s court and judge analytics.

“We’re excited about this because it really represents the beginnings of a fairly foundational transformation in the way people query large data sets,” Pfeifer said. “In the past, the focus has been on constructing Boolean searches or using the right keywords. Now, we’re at a pivotal point in interacting with large data sets. The interaction becomes more dialog-like — the interaction will be more like human interaction.”

Lexis Answers is LexisNexis’s first foray into cognitive computing, but Pfeifer said he cares less about labels such as artificial intelligence and machine learning and more about the utility being delivered to the end users.

“The end result for the attorneys should simply be better answers to their questions,” Pfeifer said. “The idea that it has to be a machine-learning application is less relevant than that the user of the machine-learning technology is delivered a better answer.”

Following my post earlier this week about the benchmark report published by Blue Hill Research that assessed the ROSS Intelligence legal research platform, I had several questions about the report and many readers contacted me with questions of their own. The author of the report, David Houlihan, principal analyst at Blue Hill, kindly agreed to answer these questions.

The study assigned researchers to four groups, one using Boolean search on either Westlaw or LexisNexis, a second using natural language search on either Westlaw or LexisNexis, a third using ROSS and Boolean search, and a fourth using ROSS and natural language search. Why did none of the groups use ROSS alone?

Houlihan: Initially, we did plan to include a “ROSS alone” group, but cut it before starting the study. We did this for two primary reasons. One: the study was relatively modest and we wanted to keep our scope manageable. Focusing on one use case (ROSS combined with another tool) was one way to do that. Two: I don’t think an examination of “ROSS alone” is particularly valuable at this time. AI-enabled research tools are in early stages of technological maturity, adoption, and use. ROSS, for example, only provides options for specialized research areas (such as bankruptcy), which means assessing it as a replacement option for Westlaw or Lexis is premature. Instead, we focused our research on the use case with the currently viable value proposition. That said, I have no doubt that there will need to be examinations of the exclusive use of AI-enabled tools over time.

The report said that you used experienced legal researchers, but it also said that they had no experience in their assigned research platforms. How is it possible for an experienced legal researcher to have no experience in Westlaw or LexisNexis? Did you have Westlaw users assigned to Lexis, and vice versa?

Houlihan: You have it. Participants were not familiar with the particular platforms that they used. They were proficient in standard research methods and techniques, but we intentionally assigned them to unfamiliar tools. So, as you say, an experienced Westlaw user could be put on LexisNexis, but not Westlaw. The goal was to minimize any special advantage that a power user might have with a system and approximate the experiences of a new user. I think readers of the report should bear that in mind. I expect different results if you were to look at the performance of the tools with users with other levels of experience. That’s another area that deserves additional investigation.

The research questions modeled real-world issues in federal bankruptcy law, but you chose researchers with minimal experience in that area of law. Why did you choose researchers with no familiarity with bankruptcy law?

Houlihan: In part, for similar reasons that we assigned tools based on lack of familiarity. We were attempting to ascertain, as a baseline, the experiences of an established practitioner who was tackling these particular types of research problems for the first time.

Moreover, introducing participants with bankruptcy experience and knowledge adds some insidious challenges. You cannot know whether your participants’ existing knowledge is affecting the research process. You also need to figure out what experience level you do wish to use and how to ensure that all of your participants are operating at that level. Selecting participants who were unfamiliar with bankruptcy law eliminated those worries. Again, though, a comparison of the various tools at different levels of practitioner expertise would be a study I would like to see.

I would think that the bankruptcy libraries on ROSS, Westlaw and LexisNexis do not mirror each other. Given this, were researchers all working from the same data set or were they using whatever data was available on whatever platform?

David Houlihan

Houlihan: Researchers were limited to searches of case law, but otherwise they were free to use the respective libraries of the tools as they found them. It strikes me as somewhat artificial to try to use identical data sets for a benchmark study like this. If we were conducting a pure technological bake-off of the search capabilities of the tools, I think that identical data sets would be the right choice. However, that’s not quite what Blue Hill is after. As a firm, we try to understand the potential business impact of a technology, based on what we can observe in real-world uses (or their close approximations). To get there, I would argue that you need to account for the inherent differences that users will encounter with the tools.

With regard to researchers’ confidence in their results, wouldn’t the use of multiple platforms always enhance confidence? In other words, if I get a result just using Lexis or get a result using both Lexis and ROSS, would the second situation provide more confidence in the result because of the confirmation of the results? And if so, would it matter if the second platform was ROSS or anything else?

Houlihan: I think that’s right, but we weren’t trying to show anything more profound. For users of ROSS in combination with a traditional tool, we quite consistently saw higher confidence and satisfaction than for users of just one of those traditional tools.

Whether it is always true that the use of two types of tools, such as Boolean and Natural Language, will yield the same response, I can’t say. We didn’t include that use case. As one of your readers rightfully pointed out, the omission is a limitation with the study. That is yet another area where more research is needed. I fear I am repeating myself too much, but the technology is new and the scope of what needs to be assessed is not trivial. It is certainly larger than what we could have hoped to cover with our one study.

For what it is worth: I wondered at the outset whether two tools would erode confidence. I still do. We tended to see fairly different sets of results returned from different tools. For example, there were a number of relevant cases that consistently appeared in the top results of one tool that did not appear as easily in another tool. To my mind, that undermines confidence, since it encourages me to ask what else I missed. That reaction was not shared by our participants, however.

With respect to the groups assigned to use ROSS and another tool, did you measure how much (or how) they used one or the other?

Houlihan: We did, but we opted to not report on it. The relative use of one tool or another varied between researchers. As a group, we did observe that participants tended to rely more on the alternative tool for the initial questions and to increase their reliance on ROSS over the course of the study. I believe we make a note about it in the report. However, we did not find that this was a sufficiently strong or significant trend to warrant any deeper commentary without more study.

(This question comes from a comment to the original post.) It appears that the Westlaw and Lexis results are combined in the “natural language” category. That causes me to wonder if one or the other exceeded ROSS in its results and they were combined to obscure that.

Houlihan: The reason we combined the tools is that we never intended to compare Westlaw v. ROSS or Lexis v. ROSS. We were interested in how our use case compared to traditional technology types used in legal research. We used both Lexis & Westlaw within each assessment group to try to get a merged view of the technology type that wasn’t overly colored by the idiosyncrasies that the particular design of a tool might bring. In fact, we debated whether to mention that Westlaw or LexisNexis tools were used in the study at all. Ultimately, we identified them as a sign that we were comparing our use case to commonly used versions of those technology types. As for how individual tools performed, all I feel we can say reliably is that we did not observe any significant variation in outcomes for different tools of the same type.

A huge thanks to David Houlihan for taking the time to answer these. The full report can be downloaded from the ROSS Intelligence website. 


I cannot remember the last time I changed a headline on a blog post or even whether I ever did. But by popular demand, I am changing the headline on my recent post about a study that compared the ROSS artificial intelligence platform against Westlaw and LexisNexis.

My headline said:

ROSS Artificial Intelligence Outperforms Westlaw and LexisNexis, Study Finds.

I have received several phone calls and emails complaining that the headline is inaccurate.

They are absolutely right. So, I have now changed the headline on the original post to read:

ROSS AI Plus Wexis Outperforms Either Westlaw or LexisNexis Alone, Study Finds.

Here’s why: The study did not compare ROSS as a standalone product against Westlaw and LexisNexis. Rather, it was always ROSS plus some part of either Westlaw or LexisNexis. As I explained in my original post, researchers were divided into four groups, with each group constrained to perform the research using a particular method:

  • Boolean search, in which researchers used only the Boolean keyword search capabilities of either Westlaw or LexisNexis.
  • Natural language search, in which researchers used only the natural language search capabilities of either Westlaw or LexisNexis.
  • ROSS and Boolean search, in which researchers used ROSS together with the Boolean keyword search capabilities of either Westlaw or LexisNexis.
  • ROSS and natural language search, in which researchers used ROSS together with the natural language search capabilities of either Westlaw or LexisNexis.

My headline implied that the study pitted ROSS alone against Westlaw and LexisNexis, when in fact ROSS was used to supplement them. So, I have now made this change.

What have been 2015’s most important developments in legal technology? For the past two years, I’ve posted my picks of the top developments in legal tech (2014, 2013). With another year under our belts, it’s time to look back at 2015.

What follows are my picks for the year’s most important legal technology developments. As in past years, the numbers are not meant to be rankings — all of these are important in their own ways. I also refer you back to my prior years’ posts, as much of what I said in them remains true today.

1. Case Law Gets Democratized.

To my mind, the biggest legal technology story of the year was the joint announcement by Harvard Law School and Ravel Law of their Free the Law project to digitize and make available to the public for free Harvard’s entire collection of U.S. case law – said to be the most comprehensive and authoritative database of American law and cases available anywhere outside the Library of Congress. As someone who has covered legal and information technology for more than two decades, this was a day I’d long hoped would arrive. Harvard’s vice dean for library and information resources, Jonathan Zittrain, summed up the significance better than I could when he said: “Libraries were founded as an engine for the democratization of knowledge, and the digitization of Harvard Law School’s collection of U.S. case law is a tremendous step forward in making legal information open and easily accessible to the public.”

This news comes at a time when legal-research start-ups continue to develop innovative ways to access and contextualize legal research. Ravel Law is one example, with its visualization tools that show the connections and relationships among cases. Casetext is another, which just this year introduced such innovations as its crowdsourced citator and its LegalPad writing and publishing tool. Developments such as these are helping to realize a long-held vision of the Internet – that it will make the law more accessible and comprehensible to everyone.

2. Analytics Take Center Stage.

The second-biggest legal technology story of 2015 was the acquisition of Lex Machina by LexisNexis. It was a significant deal in itself, but even more so for what it signals about the direction in which legal technology is headed. Why would one of the world’s most-established legal information and technology companies want this small, six-year-old Silicon Valley start-up? The answer, in a word: Analytics. Lex Machina has developed and refined sophisticated analytics that open new windows into court data. It takes data from the federal courts’ PACER system – dockets, court filings, orders – and lets users extract information, patterns and trends that would otherwise be invisible. It provides insights into lawyers, law firms, litigants, judges and courts that inform decision making and strategy.

So far, Lex Machina has done this for only intellectual property law, but that is just the tip of the iceberg. And there is no reason to confine such analytics to court data. There are troves of freely available government information that could harbor all sorts of invaluable information for legal professionals. Even beyond government data, analytics are already being used by lawyers in e-discovery, budgeting, fee negotiations, settlement negotiations, and a host of other applications. Lex Machina is by no means the only legal company in this space – PacerPro recently launched a new analytics tool called Litigant Profiling and Ravel Law offers its Judge Analytics – but its acquisition underscores the growing significance of data analytics in law.

3. The Duty of Technology Competence Goes Wide.

In 2012, when the American Bar Association formally approved a change to Rule 1.1 of the Model Rules of Professional Conduct to make clear that lawyers have a duty to be competent not only in the law and its practice, but also in technology, I described it as a sea change. But the Model Rules are merely models. Unless and until they are adopted by the states, they have no binding effect on lawyers. It is significant, therefore, that 20 states have now adopted what I call the duty of technology competence. In 2015 alone, the Model Rule was adopted in nine states and became effective in another two that had adopted it late in 2014.

Why does this matter? Because there is no more hiding from technology. You can no longer competently practice law without at least a rudimentary understanding of technology, the Internet and social media. You need to know enough to recognize what you don’t know and to withdraw or bring on help when circumstances warrant. It is safe to say that, a year from now, the majority of states will have adopted the duty of technology competence. Even in states that do not, courts are increasingly signaling their impatience with lawyers who lack basic technology skills. There can be no more Luddites in law.

4. Technology-Assisted Review Becomes Mainstream.

It was less than four years ago that U.S. Magistrate Judge Andrew J. Peck issued the first-ever court decision to approve the use of technology-assisted review in e-discovery. It has been barely six years since the terms “technology-assisted review” and “predictive coding” first began to see use within the legal profession’s vernacular. Yet this year, Judge Peck issued another TAR decision in which he declared TAR’s use to be so widely accepted by judges that it is now “black letter law.” Whereas lawyers were initially reluctant to use TAR for fear of inconsistent results or judicial rejection, 2015 was the year in which TAR took root in the mainstream of legal technology.

The reasons for this were both practical and scientific. As its name suggests, TAR uses technology to assist in the process of reviewing electronic documents for discovery. “Assist” is a wimpy word here, because TAR can dramatically reduce the number of documents lawyers ever have to set their eyes on – and therefore dramatically reduce both the time and cost of discovery. With cases today sometimes requiring review of millions of documents, TAR’s impact can be huge. Those savings in time and cost form the practical argument for its use. On top of that, scientific evidence supports its effectiveness. The seminal study on this, published in 2011, showed that TAR was not only more effective than human review at finding relevant documents, but also much cheaper – producing at least a 50-fold savings in cost over manual review. Subsequent studies have reinforced these findings and shown that a particular TAR protocol called continuous active learning is superior to other forms of TAR.
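For readers curious what continuous active learning looks like, here is a minimal sketch of the protocol — not any vendor’s actual implementation — assuming documents as plain strings and a label_fn standing in for the human reviewer:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cal_review(docs: list[str], label_fn, seed_idx: list[int],
               batch: int = 10, rounds: int = 5) -> dict[int, int]:
    """Continuous active learning in miniature: each round, retrain on all
    labels gathered so far, then send the highest-scoring unreviewed documents
    to the human reviewer represented by label_fn (returns 1 = relevant, 0 = not).

    seed_idx must include at least one relevant and one non-relevant document
    so the classifier has two classes to learn from."""
    X = TfidfVectorizer().fit_transform(docs)
    labels = {i: label_fn(i) for i in seed_idx}
    for _ in range(rounds):
        idx = list(labels)
        model = LogisticRegression().fit(X[idx], [labels[i] for i in idx])
        relevance = model.predict_proba(X)[:, 1]  # P(relevant) for every doc
        queue = [i for i in np.argsort(-relevance) if i not in labels]
        for i in queue[:batch]:
            labels[i] = label_fn(i)  # reviewer labels the top-ranked documents
    return labels
```

The time savings come from the loop’s ordering: the reviewer keeps seeing the documents the model currently ranks as most likely relevant, so review can stop long before every document has been read.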

This was the year in which these factors – judicial approval, practical benefits and scientific evidence – gelled and made TAR a mainstream technology.

5. Artificial Intelligence Comes to Legal Research.

In the early days of 2015, a team of students at the University of Toronto created a start-up to bring artificial intelligence to legal research. Using IBM’s Watson – the computer most famous for winning Jeopardy! in 2011 – as their platform, they launched ROSS Intelligence, an AI system that they say can answer lawyers’ natural-language legal-research questions, such as, “Can a bankrupt company still conduct business?”

Initially, ROSS “learned” only a small subset of Canadian law. But in July, after receiving funding from Y Combinator, its developers moved, at least temporarily, to Silicon Valley and set their sights on the much-larger U.S. market. In addition, the global law firm Dentons announced that it was making an undisclosed investment in the company. ROSS’s developers say it can provide a lawyer with a highly relevant answer to a legal research question posed in natural language. The more it does, the more it learns and the better it gets. It can also monitor legal developments for changes that can affect your case, instead of requiring you to monitor a torrent of legal news.

Will ROSS and its progeny someday replace lawyers for legal research? I have no doubt it will at least someday become an integral tool in law practice. Whatever the future of AI in law, 2015 can be recorded as the year it got started.

6. Podcasts Enjoy a Resurgence.

In a post last March, I wrote about The Rise and Fall and Rise Again of Legal Podcasts. “They were the next big thing. Then they weren’t. And now they are again,” I said, looking back over the 10 years I’ve had my own podcast, Lawyer 2 Lawyer, on the Legal Talk Network. If podcasts were looking hot earlier this year, when New York magazine proclaimed the Great Podcast Renaissance, they have only gotten hotter as the year has progressed, to the point where the Nieman Journalism Lab is predicting that podcasting is about to explode.

I’m not sure I’d use the word “explode” to describe podcasts in the legal sector, but they are clearly taking off. In fact, for the 12th edition of his annual Blawggie Awards last week, Dennis Kennedy decided not to talk about blogs at all and to focus exclusively on podcasts. Once a regular blogger, Kennedy wrote that he now sees his podcast, the Kennedy-Mighell Report, as the primary outlet for what he once wrote on his blog. At the Legal Talk Network, which hosts my podcast, there are now a variety of podcasts on a range of topics, including podcasts from both the ABA Journal and the blog Above the Law. In the post from March that I referenced above, I listed a number of legal podcasts that had launched just since the start of 2015. If you do not already listen to legal podcasts, it is a good time to start.

7. Microsoft Re-Surfaces.

Microsoft products have long dominated the legal profession. More than 90 percent of lawyers use some version of the Windows operating system and most lawyers use Microsoft Office for word processing and email. So I do not mean to suggest that Microsoft was in any way losing its footing among lawyers. However, there were indications that the tide was slowly turning against it. Windows 8 was so unpopular that lawyers clung to their earlier Windows 7 or even Windows XP operating systems (until Microsoft this year finally pulled the plug on XP). And as the widespread popularity of iPhones and iPads introduced lawyers to the iOS environment, some lawyers were beginning to migrate their entire offices to Apple systems.

But two developments this year brought Microsoft back into its long-favored status. One was the official release in July of Windows 10. It was hugely successful compared to past OS roll-outs. Not only were there no major glitches, but virtually everyone seemed to love the new architecture. Windows 10 took the best of Windows 7 and 8 and achieved a truly better, faster and more modern operating system.

The other development was the surprising (to me, anyway) popularity of the Microsoft Surface line of tablets/PCs among lawyers. It is being adopted by lawyers in a range of practices, from a local prosecutor’s office to one of the world’s largest law firms, Clifford Chance, which is buying the Surface Pro 4 for all its lawyers. And it is getting good reviews from sources such as Daniel Siegel in Law Practice magazine and Tom Mighell and Dennis Kennedy in their Kennedy-Mighell Report podcast.

On top of those developments, Microsoft has been helping third-party vendors to develop legal-specific applications for its Office 365, such as the LawToolBox court deadlines app, and its communications platform formerly known as Lync (now Skype for Business) has proven popular with law firms. All in all, for Microsoft in legal, it’s been a good year.

8. The Legal Industry Gets an IPO.

For all the activity in recent years in legal technology innovation and start-ups, there has been a dearth of legal technology IPOs. That changed this year with the June IPO of AppFolio, the Goleta, Calif.-based company that owns MyCase, the cloud-based practice-management platform; the offering raised some $74 million. I could not think of a major IPO in the legal industry in recent memory until my friend Sean Doherty, writing at Above the Law, mentioned the 2013 IPO of e-discovery and litigation support company UBIC.

Of course, even AppFolio has only a partial connection to the legal industry. Its core business is a cloud-based property-management platform for residential and commercial property managers. With its IPO, it planned to expand that business into other industry verticals. The IPO was expected to have little impact on the MyCase part of the business, Jason Randall, executive vice president of AppFolio, told me earlier this year. “We were marching on a mission before the IPO and we’re marching on the same mission after the IPO, which is giving our customers a great product to use. We’ve been heavily investing in that since day one and that hasn’t changed.”

Still, it signifies the strength of the legal market for investment. As Randall put it when I spoke with him: “We believe in this market. By buying MyCase to begin with and heavily investing in it, as we have and will continue to do, it shows our confidence that this is a great market to be in and that it is one we are committed to.”

9. Legal Blogging Hits a Plateau.

As was the case with Mark Twain 118 years ago, reports of blogging’s death have been greatly exaggerated. But even if blogging isn’t dead, it may have hit a plateau. According to the ABA’s 2015 Legal Technology Survey Report, growth in blogging among lawyers is virtually stagnant. Overall, the percentages of law firms with blogs and of lawyers who personally blog have remained largely unchanged over the past three years. Similar findings were reported by the 2015 Am Law 200 Blog Benchmark Report, a report on large law firm blogs prepared by the blog company LexBlog. While it reported a significant increase in large-firm blogs since 2007, it found only minor growth in the last three years.

However, it would be a mistake to equate the number of blogs with the importance of blogs. As I told the ABA Journal recently and wrote here earlier this year, I see blogs as more important than ever within the legal industry. I’ve cited the prominence and influence of SCOTUSblog so many times that they’re getting sick of hearing me mention them. But ask yourself where lawyers are turning for information. In increasing numbers, they are turning to blogs. Yes, some blogs are dying. But others are thriving. And meanwhile, established legal news publishers are shuttering publications or rolling through owners. Blogging will continue to evolve in the years ahead, but it’s not going away anytime soon.

10. Practice Management Continues to Expand.

You would think that I would be done talking about practice management. After all, in my 2013 post, I wrote it was the year in which practice management “went mainstream,” thanks to the growing crop of sophisticated and established cloud-based practice management platforms. Then last year, I again included the continued growth in the use of practice management applications as a major development, noting in particular that these applications were evolving from maintaining a narrow focus on simple practice management to becoming something wider — providing a variety of integrated tools and services that address an array of functions within a law office.

But here I am again, continuing to be amazed at how this segment continues to thrive and evolve. We still have new platforms raising financing and getting launched, such as PracticePanther; we still see the more established platforms continuing to build out their products, and we even saw the parent company of one platform complete a successful IPO (see #8 above). And then came the recent news from Microsoft that it had decided to open-source its practice management platform, Matter Center for Office 365. Practice management is not a sexy topic, but it continues to be one of the hottest – if not the hottest – areas of technology growth and development in the legal industry.