Data Driven Off Course

May 1, 2021

Is an Overreliance on Performance Metrics Steering Your Firm in the Wrong Direction?

The last few years have offered any number of object lessons in how different people, faced with the same information, can come to radically different, even opposing, conclusions about what that information means. Still, as leaders, we frame our decisions as being “data driven” or “evidence based” as a way of signaling that they should be beyond dispute.

The legal press is full of stories about how law firms are finally catching up to other businesses in using sophisticated metrics to evaluate their performance, gathering and analyzing data on attorney productivity and utilization, realization and discount rates, efficiency of collections, and overall degree of leverage. Legal industry consultants and a growing class of non-lawyer executives, including CFOs and COOs, are pushing managing partners and compensation committees to take profitability into account as they make key financial decisions.

Certainly this more disciplined approach to management is a substantial improvement over the “gut feel” decision-making that once steered firm leadership. But having a lot of numbers in front of you isn’t the same thing as having a deep understanding of how things are going at your firm.

Misreading Moneyball

An automated and Internet-connected world provides us with vast amounts of data, and the ability to analyze that data quickly and in multiple dimensions. When deployed effectively, the information coming from this analysis pushes us toward efficiency. In sports, the “Moneyball” philosophy — finding value by pinpointing specific factors that drive success — enables small-market teams to outperform their wealthier, large-market rivals by assembling the right combination of undervalued talent to execute a winning strategy. It’s no surprise, then, that dozens of “Moneyball Your Business” and “Moneyball for Lawyers” e-books and courses are for sale online.

At its best, being data driven means moving away from the assumptions of the past — and all the implicit biases that came with them. In the book that introduced the term “Moneyball” to the popular imagination, business writer Michael Lewis tells the story of Oakland A’s general manager Billy Beane and his efforts to build a winning baseball team on a limited budget. Beane himself — played by Brad Pitt in the movie adaptation of Lewis’ bestseller — looked like a Major League Baseball player and, indeed, had once been a top prospect to play professionally. But Beane’s playing career never took off and, in running the Athletics’ back office, he came to understand that players like him who looked the part didn’t always perform as expected. Guys with lean, athletic builds and quick strides often under-delivered on their supposed potential, while other players who didn’t have the same “central casting” appeal — such as the pudgy catcher and the pitcher with a weird side-armed delivery Beane went on to recruit — were regularly passed over and their contributions to team success undervalued or ignored. By choosing players based on specific statistical measures of their past performance rather than more general assumptions about their strengths and abilities, Beane discovered and exploited a novel way to predict which players would most effectively contribute to the team’s success.

Similarly, in law firms, generations of managing partners held on to the notion that you could determine who your most productive associates were simply by walking around to see who was still in the office at 6:30 pm. As it turns out, of course, billing data captured by time tracking systems in financial software packages tells a far more accurate story about who is doing the most work.

This increased sophistication around data and performance metrics is generally good news. Having real-time visibility via your time and billing system into utilization, for example, is a great way to get a high-level sense of how well the firm is doing. You should know what the denominator is — the “absolute capacity” of the practice, office or group you lead — and, in general terms, if you’re hitting 70 to 75% of that number you’re doing pretty well. At an individual level, though, the question is more nuanced. The attorney at 90% utilization right now — working flat out on a massive case about to go to trial — is certainly “busier” than the one at 60% utilization, billing time to a number of relatively slow-moving matters. But that’s just a snapshot of this particular moment. Once the trial’s over, will Ms. 90% have other work to do? And, even if she does, will her burnout from the trial drop her utilization to 20% for the next few months? On the other hand, has Mr. 60% been putting non-billable hours into business development that will yield significant work in the future? Is he serving the firm in another, non-billable way, by leading a committee or mentoring summer associates?
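To make that arithmetic concrete, here is a minimal sketch of the utilization calculation described above. The figures — a ten-lawyer group and an 1,800-hour annual billing target — are hypothetical assumptions, not drawn from any actual firm:

    # A minimal sketch of the utilization arithmetic, using hypothetical figures.
    def utilization(billed_hours, headcount, annual_hours_target=1_800):
        """Billed hours as a share of the group's absolute capacity."""
        capacity = headcount * annual_hours_target  # the "denominator"
        return billed_hours / capacity

    # A hypothetical ten-lawyer practice group over a full year:
    rate = utilization(billed_hours=13_000, headcount=10)
    print(f"Group utilization: {rate:.0%}")  # roughly 72%, within the 70-75% band

The same calculation applied to an individual, of course, yields only the snapshot the paragraph above warns about; the number says nothing about what happens after the trial ends or what non-billable work isn't being counted.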

Data Due Diligence and Due Process

A still-unfolding controversy at one of the nation's top medical schools points to the potential hazards of being too certain about what digitally harvested data is really telling us. Earlier this year, 17 students at Dartmouth's Geisel School of Medicine were accused of cheating on computer-administered tests during the pandemic, based on seemingly indisputable evidence that they had looked up course material online while taking exams. Though school officials reportedly advised the students to admit to what looked like an obvious violation of the school's honor code, subsequent investigations, detailed in The New York Times, showed that "automated … processes are likely to have created the data that was seen rather than deliberate activity by the user … Course pages in the school's learning management system can automatically generate activity data even when no one is looking at them."

College administrators thought they had data about what their students were doing during remote exams. Instead, they had data about what their students’ computers were doing — with or without the students’ knowledge. Having failed to complete their due diligence in understanding what the tool was actually measuring, the administration also failed to set up appropriate processes (such as warning students to shut down software running on devices they weren’t using to take the test) to improve the quality of the data they collected or to ensure that the conclusions they drew from it, and the actions they took based on those conclusions, actually made sense. In the end, the high-tech “data-driven” system for rooting out cheating was no better — and possibly significantly worse — than relying on a professor’s or proctor’s gut feel about which students’ test scores were out of line with their general performance.

Similarly, performance metrics, as captured and reported out by billing systems and other enterprise tools, capture what is happening at your firm but not how or why it’s happening. Is that spike in hours a sign that your department is busy and thriving? Or is it the last gasp of a burned-out team that’s ready to quit as soon as they have their bonus checks in hand?

Basing your decisions on data can be a way to remove bias and blind spots from the choices you make as a manager. But relying too heavily on quantitative measures without a deep understanding of how the numbers are generated or robust processes for contextualizing those numbers within the current environment can be problematic too. When we privilege “what the data says” over what people say in real-life conversations, we can miss out on vital information. We might not have a formal measure of team burnout on our data dashboards, for example, but that doesn’t mean we shouldn’t be taking it into account when we make decisions about staffing, recruiting and compensation and benefit packages. And, when we make critical decisions — especially in hiring, firing and promotions — based on data, we owe our colleagues transparency around where that data came from and an opportunity to provide additional detail related to their own work.

What We Measure

ALM Publications’ Am Law 100 and 200 data is the legal industry’s north star for measuring law firm performance. While firms once awaited each year’s release of the lists just to see who would outrank whom in terms of gross revenue, in recent years, firm leaders and industry experts have taken a more “Moneyball”-inspired approach, focusing not just on the absolute numbers but on the calculated metrics that ALM’s reporters and analysts apply to them.

For each calendar year, firms report their gross revenue (total fee income), net income (total compensation given to equity partners) and attorney headcount numbers (broken down into equity partners, non-equity partners and all other lawyers). From those basic financials, ALM calculates a number of metrics, including:

  • Profits per partner: net income divided by the number of equity partners
  • Revenue per lawyer: gross revenue divided by the total number of lawyers
  • Profitability index: profits per partner divided by revenue per lawyer

(Read a more detailed explanation of the Am Law ranking process from ALM managing editor Ben Seal here.)
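To see how those three metrics relate to one another, here is a minimal sketch of the arithmetic using made-up figures — they are illustrative assumptions, not any firm's reported Am Law data:

    # Illustrative only: hypothetical figures, not any firm's actual Am Law filing.
    gross_revenue = 500_000_000   # total fee income
    net_income = 200_000_000      # total compensation to equity partners
    equity_partners = 100
    total_lawyers = 400           # equity + non-equity partners + all other lawyers

    profits_per_partner = net_income / equity_partners              # $2,000,000
    revenue_per_lawyer = gross_revenue / total_lawyers              # $1,250,000
    profitability_index = profits_per_partner / revenue_per_lawyer  # 1.6

    print(f"Profits per partner: ${profits_per_partner:,.0f}")
    print(f"Revenue per lawyer:  ${revenue_per_lawyer:,.0f}")
    print(f"Profitability index: {profitability_index:.2f}")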

ALM, Seal notes, considers revenue per lawyer to be the best measure of a firm’s overall financial health. Others in the industry, including many of the firms themselves, see the profitability index as the ultimate marker of firm success, in part because it’s the closest analogue to the performance measures many corporate clients apply to their own businesses. A thriving enterprise efficiently converts revenues into profits.

Profitability is, of course, an important benchmark, and its inclusion as a criterion for how firms understand and evaluate their own performance is long overdue. But, as some industry leaders have pointed out, the drive for profitability — or, rather, the appearance of profitability as defined by the ALM profitability index metric — creates a set of perverse incentives that can ultimately hinder a firm’s performance and growth.

Consider the factors that go into calculating the profitability index. First, there’s net income divided by the number of equity partners. The “best” number here would come from the largest possible income split among the smallest possible number of partners. In an environment of continued economic uncertainty, most firms face some pressure from clients to keep rates steady or minimize increases, so pushing that income number higher generally means billing more hours, even when clients might benefit from a speedier resolution to their cases. At the same time, limiting the size of a firm’s equity partnership could mean losing or alienating future rainmakers. The same dynamic holds for the second part of the equation, gross revenue divided by the total number of lawyers. To get the “best” number, a firm wants a relatively small roster of full-time attorneys billing a large number of hours, even if they do so at the expense of non-billable work that could be valuable in the long term, such as business development or recruiting.
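A short sketch, again using the hypothetical figures from above, shows how the denominator effect works: shrinking the equity tier produces a "better" profits-per-partner number and a higher index with no change in revenue or underlying performance. (For simplicity, this holds net income fixed; in practice, compensating de-equitized partners as employees would reduce it somewhat.)

    # Illustrative only: same hypothetical firm, smaller equity tier.
    gross_revenue = 500_000_000
    net_income = 200_000_000
    total_lawyers = 400
    revenue_per_lawyer = gross_revenue / total_lawyers  # unchanged at $1,250,000

    for equity_partners in (100, 80):  # de-equitize 20 partners
        ppp = net_income / equity_partners
        index = ppp / revenue_per_lawyer
        print(f"{equity_partners} equity partners -> PPP ${ppp:,.0f}, index {index:.2f}")
    # 100 equity partners -> PPP $2,000,000, index 1.60
    #  80 equity partners -> PPP $2,500,000, index 2.00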

Being profitable is good. And using profitability data to inform strategic decisions is smart. If two partners each have $5 million books of business but require very different levels of resources and support, it makes good sense for a firm to look closely at the true costs associated with each and to reward the partner whose book generates that revenue at lower cost. There’s danger, though, in overapplying lessons learned from understanding profitability at a granular level to overall firm management. Doing so oversimplifies decisions that are, necessarily, very complex.

“Not every decision can be based on profitability alone,” counseled Ron Safer, founding partner of Riley Safer Holmes & Cancila, in a widely circulated issue of ALM’s Mid-Market Report. “Perhaps a firm offered a different rate structure for a new client in order to win their business going forward. Maybe the firm is trying to get new lawyers exposure to a specific area or is tackling a matter that’s in the public’s interest. There are many reasons why a matter might not be particularly profitable and still be a tremendous success.”

While every choice you make should be informed by data, not every decision should be driven solely by how it moves the needles on an ALM-defined dashboard.

Getting Smarter

An increased emphasis on digital technology and the necessary embrace of big change has firms emerging from the crises of the past year with a renewed desire to be smarter about what we measure and how we communicate what we learn from that data. Our connected devices are generating mountains of information about what we produce and consume, how and when we work, and what moves us to act. Because it’s generated and quantified by machines, we tend to think of this data as purely objective and neutral. That may be true of raw data (though, even there, decisions made by the human programmers who design the machines are subject to bias). But when we rely on this data, and on our analysis of it, to shape our decision-making, we still apply all kinds of subjective and unconscious filters.

As you consider how to take a more data-driven approach to managing your practice, begin from your own first principles: How will you define success? What’s most important to the people you lead and the clients you serve?

Then consider the metrics that could help you track your performance in those areas. Some will be easily quantifiable, while others might be more subtle.

If you’re leading a firm in growth mode, you might have particular goals about the markets you want to serve and the scale at which you can best achieve those results. You’ll count the clients you’ve added, the attorneys and staff you’ve hired and the collective value of your book of business. By contrast, if you’re charged with carrying on the legacy of a firm with an established reputation for excellence, you might be more concerned with your effectiveness at retaining top talent and expanding the amount of work you do for certain key clients.

If you’re a deal-maker and your real estate developer clients are looking to buy more land and build more buildings, you should be counting the number and value of the transactions you close for them that make this happen. If you’re a patent litigator pursuing claims for pharmaceutical companies, how often are you succeeding on their behalf and at what point in the process?

As you gather data in the areas that are most important for your goals, spend time interrogating the information you’re looking at:

  • Was it easy to find?
  • How was it collected?
  • Can you track it over time in order to see trends?
  • What might it leave out?
  • What follow-up questions should you be asking to be sure you’re drawing valid conclusions from it?
  • If you shared this data publicly, how would it be received and understood?
  • If you’ve struggled to get clear data, what does that tell you about the systems your firm has in place?

Though this approach might sound intuitive, in practice many firms take the opposite tack. They measure things that are easiest to measure — hours as entered into the billing system; visitors to the firm website; revenues collected in a given month — rather than the things that are most important to understand. They find themselves with thousands of data points, but struggle to see the point of the data.

In this confusion, it’s easy to turn back to the “industry standard” metrics of annual profits per partner or revenue per lawyer since those numbers, at least, help you keep score on how you’re doing relative to other firms. To do so, though, would be to miss out on the rich possibilities of deeper analysis. The real lesson of Moneyball, after all, was the importance of looking at metrics the other teams weren’t tracking.