The Lifecycle of an Algorithm: How to Build AI That Actually Works

By Abhishek Mittal, Vice President, Data Analytics and Operational Excellence, Wolters Kluwer’s Governance, Risk & Compliance division

Artificial intelligence is quickly going from pipe dream to reality. In a recent McKinsey survey, over half of respondents reported at least one current use case for AI. As organizations look to ramp up their adoption, it’s likely they will begin building more AI models in-house. This is a worthy endeavor, but it’s also a challenging one.

The team I lead specializes in developing technology for corporate law departments (CLDs).

About five years ago, we were increasingly hearing from CLDs that they were buried under piles of physical and digital paperwork, and that invoices were not receiving the attention they deserved. The result was lost efficiency and higher costs.

So, we embarked on a journey to train AI to review legal invoices for both accuracy and compliance. Five years later, that AI model has grown to support CLDs in companies across 20 industries, with more than 600 unique sets of billing guidelines and 225 distinct practice areas.

The past half-decade has been full of important learning moments. My team and I have seen firsthand what it takes to build smart, functional, impactful models. Here are four lessons we’ve learned from our AI journey to help you do the same.

PLAN

Before you build anything, get data scientists and management team members together to consider which processes can be improved significantly through the use of AI. AI is an innovative technology, but building an AI system should be an exercise in practicality. It is not a panacea for every problem.

To decide which process to tackle first, follow the data. Assess the data you currently have and the state it’s in. Irrelevant or unstructured data will create problems for your AI program, so you need to ensure that your data is of high quality and relevant to what you’re trying to achieve. If it isn’t, consider how much work it would take for your team to get that data into the right shape to be leveraged by an AI tool.
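To make that assessment concrete, here is a minimal sketch of what a readiness check might look like in Python. The column names and data source are assumptions for illustration, not a description of our actual tooling; the thresholds you apply to the results will depend on your own use case.

```python
# A minimal sketch of a pre-build data assessment, assuming invoice records
# live in a pandas DataFrame with hypothetical columns such as "line_item",
# "amount", and "practice_area".
import pandas as pd

def assess_readiness(df: pd.DataFrame, required_cols: list[str]) -> dict:
    """Summarize how much cleanup a dataset needs before AI work begins."""
    report = {}
    # Structural check: are the fields the model will rely on even present?
    report["missing_columns"] = [c for c in required_cols if c not in df.columns]
    # Completeness check: how much of each required field is actually populated?
    present = [c for c in required_cols if c in df.columns]
    report["null_rate"] = {c: float(df[c].isna().mean()) for c in present}
    # Duplication check: duplicated rows inflate volume without adding signal.
    report["duplicate_rate"] = float(df.duplicated().mean())
    return report

# Example usage with hypothetical data:
# invoices = pd.read_csv("legal_invoices.csv")
# print(assess_readiness(invoices, ["line_item", "amount", "practice_area"]))
```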

In our case, we already had two substantial databases of legal spend from a wide range of industries and companies. All this information had already been cleansed, so we knew we were pulling from a quality data source that could fuel a robust AI algorithm. We secured customer permission to aggregate and use that data to provide new services. These things combined to put us in an excellent position to build our model without significant preparation work.

COMMUNICATE

No matter the AI use case, there will be change to manage. Be prepared to help users understand the benefits of the changes you’re making. AI is a dramatic shift. From day one, you need to make everyone comfortable with it. AI is not here to take anyone’s jobs. It’s here to make their jobs easier and better. If you don’t build an understanding of this from the start, your project is more likely to fail.

If the use of AI impacts outside partners, change management will be needed with this group as well. Our clients’ partners are law firms, and we did not want this innovation to be perceived as second-guessing their work. To avoid friction, we offered law firms training and an open line of dialogue to address questions. The experience of using the new AI-enabled tool was as important to us as its technical soundness, but we could only gauge that experience by communicating constantly and thoroughly.

TRAIN

The beauty of AI is that it gets smarter over time. Use machine learning algorithms to train your model: as your system ingests more data, it will offer more intelligent and accurate recommendations.

But the algorithm is only there to help. A skilled human still must analyze its recommendations and make decisions based on them. A feedback loop provides additional data for your human team to consider; it can also be fed back to the model so it learns how to handle grey-area scenarios and understand context clues in the future.

For example, block billing, where lawyers aggregate smaller tasks into a single invoice line item, is acceptable in some contexts but insufficient in others. AI does not always know when it’s acceptable to bill in blocks and in which scenarios a CLD will require more detailed invoicing. We have chosen to check in with clients every week to compare the model’s work to that of a human reviewer. This provides the algorithm with constant human feedback and allows it to keep getting smarter. It also helps CLDs feel confident in the outcomes of the process.
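A minimal sketch of the core of such a weekly comparison is shown below. The record structure and field names are hypothetical, not our production schema; the point is simply that the reviewer’s decisions double as the labels the model learns from next.

```python
# A minimal sketch of a weekly model-vs-reviewer comparison, assuming each
# record is a dict with hypothetical keys "line_id", "model_flag" (did the
# model flag the line item?) and "reviewer_flag" (the human reviewer's call).

def weekly_feedback(records: list[dict]) -> tuple[float, list[dict]]:
    """Return the model/reviewer agreement rate and the disagreements
    to be added to the next round of training data."""
    if not records:
        return 0.0, []
    disagreements = [r for r in records if r["model_flag"] != r["reviewer_flag"]]
    agreement = 1 - len(disagreements) / len(records)
    # The reviewer's decision becomes the label the model learns from next time.
    new_training_examples = [
        {"line_id": r["line_id"], "label": r["reviewer_flag"]} for r in disagreements
    ]
    return agreement, new_training_examples

# Example usage with hypothetical review results:
# rate, examples = weekly_feedback(this_weeks_reviews)
# print(f"Model agreed with reviewers on {rate:.1%} of line items")
```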

Keep in mind as well that a combination of volume and diversity of data is key to developing a strong model. Consider pulling in trusted outside information sources to complement your own data as needed.

SCALE

Building an AI model starts with choosing a process for which you can add incremental value. The goal is to continue adding more and more value with time. Our models have taught us about industry trends and best practices, for instance. Now, the model can help clients benchmark their needs, meet those needs, and improve their billing guidelines.

But scale should never come at the cost of quality. An independent team audits our AI models once a year. The audit is a seven-step process that covers everything from data collection to model-building. While we have a lot of statistical processes built into our models for quality assurance, having a framework that’s managed by an external team helps us and our customers stay confident that our model is accurate and compliant.
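As one illustration of the kind of statistical check such an audit or internal QA process might include, the sketch below measures how well a model’s flags line up with independently audited labels on a sample of invoice lines. The function and variable names are assumptions for the example, not part of our audit framework.

```python
# A minimal sketch of one accuracy check an audit might include, assuming a
# sample of invoice lines has been independently labeled ("gold") alongside
# the model's decisions ("pred"). All names here are illustrative.

def precision_recall(gold: list[bool], pred: list[bool]) -> tuple[float, float]:
    """Precision and recall of the model's flags against audited labels."""
    true_pos = sum(1 for g, p in zip(gold, pred) if g and p)
    flagged = sum(pred)   # everything the model flagged
    actual = sum(gold)    # everything the auditors say should have been flagged
    precision = true_pos / flagged if flagged else 0.0
    recall = true_pos / actual if actual else 0.0
    return precision, recall

# Example usage with a hypothetical audit sample:
# p, r = precision_recall(audit_labels, model_flags)
# print(f"precision={p:.2f}, recall={r:.2f}")
```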

While there is no shortcut for building an effective AI model, there is a blueprint. If you plan, communicate, train, and scale your model with care, and use internal and external resources to test its continued efficacy, you’ll be celebrating its successful fifth anniversary in no time.