This morning, as I prepared to write this, a headline flashed across my screen: an exploration company that is not part of the search team for the missing Malaysian airliner claims to have located wreckage that might be the plane. The problem is that it is 3,000 miles north of the “official” search zone. I completely understand that this could be an attention grab by the company (which is why I am not naming it here), but I also understand that the company is making very specific claims that would be simple to disprove, such as the claim that its data indicates the “wreckage” it found was not there 3 days before the Malaysian airliner went missing. But it was the reaction from the Joint Agency Coordination Centre that was most compelling to me. The JACC said that the search grid is based on satellite and other data that led to the development of the search arcs, and that the location indicated is not on the arc.
Color me unimpressed with that explanation. This is a massive undertaking, and with more than 2/3 of the search area covered, there is still no sign of the aircraft. So I get it. We can’t waste manpower and resources chasing every lead. But this is not Courtney Love claiming to have randomly found the wreckage. These are presumably scientists, with data.
When developing portfolio-based operations for the delivery of legal services, it seems obvious that decisions should be based on data, rather than supposition or anecdotal experience. The problems with actually performing that way, however, are myriad.
First, if you are developing a program for a prospective client (or even for a current client), there may not be any data, or at least certainly not enough upon which to base an entire operation. A prospective client will often not have the data, or it will be unwilling to share it. For example, I have worked with several of my colleagues on Requests for Proposals related to alternative fee and value-based billing arrangements. Many of those clients will provide vague references to data, or give such a wide range that the data is rendered almost useless. One of my favorites was, “Last year’s legal spend on fees and expenses was $11.8 million.” (That number is made up.) When pressed, the client could not identify the percentage of that figure devoted to expenses, or give any guidance on what kinds of matters were included in it. The client thought its request was rather simple: reduce that number by a significant percentage. And while that goal is simple to state, it offers no help on how to accomplish it.
So if there is little to no workable data, we make educated estimates based on experience, workload for other clients in similar industries, and so on. But those of us who have converted part of our practices to these types of arrangements can tell you that even the firm-side data has challenges. If the data is based exclusively on income or profit margin, you are going to have a hard time converting it into usable information. Workload data is helpful, but only if you can confirm that the underlying work was performed with some reasonable level of efficiency.
By way of example, if your information is, “we had 25 matters in 2013 for a similar client, and we billed $625,000 for the work,” you have very little to work from. That is $25,000 per file. Is that 100 hours, or 60? Were those 25 files all assigned to a single attorney? If not, is there any data on how the hours fluctuated between attorneys internally? Did the associate with 13 of those files bill an average of $14,000 per file, while the associate with the other 12 billed $37,000 per file? In short, the deeper the data, the better the estimate.
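To make the point concrete, here is a minimal sketch, in Python and with entirely made-up numbers (the attorney names and per-file figures are hypothetical, echoing the example above), of how a flat per-file average can hide a wide split between the attorneys actually doing the work:

```python
# Hypothetical billing data: attorney -> list of fees billed per matter.
# These numbers are invented for illustration only.
matters = {
    "Associate A": [14_000] * 13,   # 13 files at roughly $14,000 each
    "Associate B": [37_000] * 12,   # 12 files at roughly $37,000 each
}

# Flatten to one list of per-file fees across all attorneys.
all_fees = [fee for fees in matters.values() for fee in fees]

overall_avg = sum(all_fees) / len(all_fees)
print(f"Overall: {len(all_fees)} files, ${sum(all_fees):,} billed, "
      f"${overall_avg:,.0f} per file")

# The blended average looks uniform; the per-attorney view does not.
for attorney, fees in matters.items():
    avg = sum(fees) / len(fees)
    print(f"{attorney}: {len(fees)} files, ${avg:,.0f} per file")
```

The blended figure comes out to roughly $25,000 per file, which looks tidy, while the per-attorney breakdown shows one book of work costing well over twice the other. The same drill-down logic applies whether the slicing dimension is attorney, matter type, or hours.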
Beyond the above, one of the more challenging aspects of running a program like this is consistently revising the process based on the data. I wrote in an earlier post about a “Kaizen” approach, a process-based approach to continuously refining your process. Several factors interfere with it. The work still needs to get done, so tinkering with the process will always tend to move to the back burner. In addition, if you are doing it right, there is a feeling that you are never done; some portion of the process is always under construction, so the sense of accomplishment can be hard to come by. Finally, once the project begins, there is no obvious, natural trigger for re-examining the process. As a consequence, maintaining the pace requires stamina, and it may require inventing milestones that force those responsible for the process to analyze the data and re-engineer based on what it shows.
But the analysis of the data is the key. In the late 1970s, two things were happening at the same time, although, as it turns out, not in tandem: there was an increase in vaccinations, particularly vaccines containing Thimerosal, and there was also an increase (to the tune of nearly 300%) in diagnoses of childhood autism. Then in 1998, Dr. Andrew Wakefield published a study that claimed to establish a link between the MMR vaccine and the development of autism. The problem is that, according to the British Medical Journal, the paper was an “elaborate fraud” in which Dr. Wakefield altered the data of every single patient who formed the basis of the study. The 1998 study has been withdrawn and all but universally discredited. But the movement remains. The end result? Long-dormant diseases like mumps and measles are enjoying near-epidemic resurgences.
There was data in 1998 that appeared to support a long-standing fear held by people with no data of their own, who relied instead on a loose temporal association. When the data changed or was discredited, those people failed to change their understanding with it. Their minds were made up, and they were not revisiting those decisions when faced with new data. Your value-based billing or portfolio-based project can die on the vine if you are not agile enough to alter your process to accommodate the data.