A methodology to develop high quality and trustworthy AI

How can an enterprise operationalize AI in a way that is repeatable, sustainable, and, perhaps most important, trusted? Regions Bank faced this challenge when our advanced analytics practice too often relied on siloed data sets, development teams working in isolation, and disparate and somewhat inconsistent development methods.

Working with IBM, we’ve transformed advanced analytics using modern tools and new open and transparent methodologies. After creating an analytics Center of Excellence, we’ve brought data into a centralized environment, applied more machine learning and AI techniques, and, above all, adopted an end-to-end business value approach that includes AI quality control.

The result has been trusted analytical solutions that help reduce risk, detect fraud, assist commercial customers, and provide insights into consumers so we can better meet their needs.

A repeatable development process for data products

We call such solutions data products, and we build them using an agile, repeatable approach that brings disciplines from software engineering to data management and advanced analytics.

After we identify a problem, we devise a business case for solving it. Next, we lay out a roadmap and assemble a multidisciplinary development team. The team applies agile methods to ingest relevant data, build the analytical model and user interface (UI), roll out the solution, evangelize adoption, assess the impact, and monitor and measure performance of the data and the model.

Creating a data product is never a one-and-done effort, however. The team keeps improving it by working through a backlog of features, fixes, maintenance items and new releases.

Trust requires strong AI ethics and governance

Still, we need to trust that our AI models are fair and ethical. Regions prides itself on open and trusted customer relationships. Our values involve doing the right thing and improving the lives of our communities, customers and associates. That’s why we bake AI ethics oversight into our development methodology.

Ethical AI requires data completeness, accuracy and quality, and the data used to train a model must be representative of the data it will make decisions on. The models must also be “explainable,” meaning their decision-making process is clear.

To keep our solutions on track, a variety of stakeholders provide oversight of our data and the transparency of our models. An internal oversight team helps ensure the fairness, safety and soundness of our solutions, as do risk management and audit partners and government regulators.
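As a simple illustration of the kind of fairness check an oversight team might run (this is a generic sketch, not Regions’ actual tooling, and the record format is a hypothetical example), the “four-fifths rule” compares each group’s favorable-outcome rate against a reference group and flags ratios below roughly 0.8 for review:

```python
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs, where group is a
    demographic segment used only for monitoring, never as a model input."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_ratio(records, reference_group):
    """Ratio of each group's favorable-outcome rate to the reference
    group's rate. Values below ~0.8 (the 'four-fifths rule') are a
    common trigger for closer human review of the model."""
    rates = approval_rates(records)
    reference_rate = rates[reference_group]
    return {g: rate / reference_rate for g, rate in rates.items()}
```

A check like this is deliberately crude; in practice it would be one signal among many feeding the oversight process, not a pass/fail gate.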

Synergy and alignment with data tools and personnel

We also work closely with IBM Data and AI Expert Labs and the IBM Data Science and AI Elite team, with whom we’ve found synergy in both technology and personnel. Collaborating with this group on a common approach has produced consistently productive outcomes.

We agree on the need for multifunction tools such as IBM Cloud Pak for Data that help us analyze data, assess data drift, measure model performance and keep our personnel informed. And the IBM team is aligned with our end-to-end approach to the AI lifecycle.
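To make the data-drift idea concrete, here is a minimal sketch of one widely used drift metric, the population stability index (PSI), which compares a feature’s training-time distribution with its production distribution. This is a generic illustration, not the method Cloud Pak for Data uses internally:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's distribution at training time (expected)
    with its distribution in production (actual). A common rule of
    thumb: PSI < 0.1 is little drift, 0.1-0.25 moderate, > 0.25
    significant enough to investigate."""
    # Bin edges come from the training-time distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)
    # Clip to a small epsilon so empty bins don't blow up the log.
    eps = 1e-6
    expected_pct = np.clip(expected_pct, eps, None)
    actual_pct = np.clip(actual_pct, eps, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))
```

Monitoring a handful of such statistics per feature, alongside model performance metrics, is what turns a one-time model into a measurable, maintainable data product.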

Often, we’re a step ahead of partners that favor legacy methodologies. But IBM understands that we’re not just building a model, we’re building a product, and many distinct stages are needed to get from the business problem to measuring the solution’s impact, and then to repeating the cycle.

IBM’s support for agile processes also assists with AI oversight. If we can get our oversight partners on board early to understand the business case and requirements, along with the code and the developers’ intent throughout every sprint, they can provide faster feedback. That accelerates the development of the high-quality, trustworthy AI solutions that Regions Bank strives for.