
The enemies of sustainable AI: Concept drift, data drift and algorithm drift

Sponsored


Presented by Blue.cloud


Back in 2019, Gartner predicted that the vast majority of AI projects would continue to fail: only 53% of projects make it from prototype to production, and 85% of those blow up. That’s more or less the state of the industry today, and yet AI adoption has only accelerated. In an IBM study, 42% of organizations reported they’re exploring AI, and AI adoption is growing steadily, up four points from 2021.

“Very few AI products become successful in creating value for companies, even though companies invest quite a lot of manpower and resources,” says Ali Riza Kuyucu, global head of data and analytics at Blue.cloud. “But driving efficiencies through artificial intelligence requires constant monitoring and improvement, or what we call continuous AI — keeping and sustaining the business value of AI for an organization over a longer period.”

It’s more than just an iterative approach to AI, he adds. Iterative means continuously optimizing the project itself, as results emerge. The problem comes when a good result from one use case prompts a team to jump to another use case, without considering the sustainability of those predictive capabilities. Continuous AI is about continuously monitoring drift in a model and trying to prevent it — and if necessary, retraining or rebuilding the models to operationalize them, and ensure their predictive capabilities remain on target for your specific situation.

“Continuous AI deals with challenges like the loss of efficiency in the incubation phase by factors like changes in source data and data reliability as well as changing economic and business conditions,” says Zavier Rodriguez, chief technology officer at Ideal Agent, a Blue.cloud customer. “The approach is crucial to keep the value of the AI models by maintaining their predictable power and by providing the necessary technical architecture.”

This is a tremendous challenge, and it’s often invisible. The business problem or issue a model is addressing is rarely, if ever, static. When a model is trained, data scientists are essentially referencing a single snapshot in time — the situation as it stands. But those conditions inevitably drift over time.

The consequences of model drift

Most of the time AI is about predicting a variable, Kuyucu says, whether that’s fraud, churn, attrition, customer behavior and so on. When the context starts to alter from the original state of affairs, those predictions become less and less accurate. You might start at 80% accuracy, but soon start seeing that number drop as the model begins to drift.

The two major potential missteps — a false negative and a false positive — can have wide-reaching consequences when they trigger the wrong actions. For instance, taking action on a false positive in a fraud case can incite a great deal of backlash with major ramifications for the company. A false negative leaves your company wide open and vulnerable; both extremes, as well as all the other potential outcomes, big and small across that spectrum, have a direct effect on the bottom line.
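
As a minimal sketch of how those two error types can be tracked over time (the helper name, baseline rates and alerting threshold below are hypothetical illustrations, not figures from the article), the false-positive and false-negative rates of a binary fraud model can be computed from recent predictions and compared against the rates measured at deployment:

```python
from collections import Counter

def error_rates(y_true, y_pred):
    """False-positive and false-negative rates for a binary fraud model
    (1 = flagged as fraud, 0 = not flagged). Hypothetical helper."""
    counts = Counter(zip(y_true, y_pred))
    fp = counts[(0, 1)]   # legitimate transaction flagged as fraud
    fn = counts[(1, 0)]   # actual fraud that slipped through
    tn = counts[(0, 0)]
    tp = counts[(1, 1)]
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return fpr, fnr

# Assumed baseline rates measured when the model went live.
BASELINE_FPR, BASELINE_FNR = 0.02, 0.05

fpr, fnr = error_rates(y_true=[0, 1, 0, 1, 0], y_pred=[0, 1, 1, 1, 0])
if fpr > 2 * BASELINE_FPR or fnr > 2 * BASELINE_FNR:
    print("Error rates have moved well past the deployment baseline; investigate.")
```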

“This is why you continuously have to work on the models to keep and sustain their predictive power,” Kuyucu says. “Organizations not only become more data-driven when they wring the maximum value out of their data and keep their models on track, but also protect themselves from harm.”

It’s about trust and internal investment, he adds.

“When a model fails, motivation and buy-in also decrease among upper management,” he says. “Keeping C-suite belief in the value of your AI efforts is crucial to the longevity and success of your initiatives.”

Identifying and repairing model drift

Performance loss in AI models comes in three flavors. Concept drift entails changes in the characteristics of the dependent variable, the thing the model is predicting. For example, if the definition of fraud or the way we measure it changes, or if general consumer trends shift, concept drift has occurred.

Data drift refers to changes in the characteristics of independent variables. That can include seasonal changes, upheavals like COVID, or the density, frequency or volume of usage of the product.

Algorithm drift occurs in two ways. First, business needs change, so the algorithms are no longer aligned with business requirements. Second, algorithms evolve and improve over time, and drift happens when a company fails to swap in a better-performing one.

Once drift occurs, the model requires human intervention to remain accurate and reliable, and in dynamic business environments, that means regular observation and retraining. Concept drift requires continuous monitoring of predictions versus the actual situations, with statistical measures and tests. Data drift means tracking the statistical distribution of input data.
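
A minimal sketch of what that monitoring can look like in practice (the window, thresholds, feature and accuracy figures below are assumptions for illustration, not recommendations from the article): concept drift is flagged by comparing accuracy on a recent window of predictions versus actual outcomes against the accuracy measured at training time, and data drift by comparing the live distribution of an input feature against its training distribution with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

ACCURACY_AT_TRAINING = 0.80   # assumed validation accuracy when the model shipped
ACCURACY_TOLERANCE = 0.05     # flag concept drift if accuracy falls more than this
KS_P_VALUE_THRESHOLD = 0.01   # flag data drift below this p-value

def concept_drift_flag(y_true_window, y_pred_window):
    """Compare accuracy on a recent window of predictions vs. actual outcomes
    against the accuracy observed at training time."""
    accuracy = np.mean(np.asarray(y_true_window) == np.asarray(y_pred_window))
    return accuracy < ACCURACY_AT_TRAINING - ACCURACY_TOLERANCE

def data_drift_flag(training_feature, live_feature):
    """Two-sample KS test: has the live distribution of an input feature
    moved away from the distribution the model was trained on?"""
    statistic, p_value = ks_2samp(training_feature, live_feature)
    return p_value < KS_P_VALUE_THRESHOLD

# Synthetic data standing in for a real feature (e.g. transaction amount).
rng = np.random.default_rng(0)
train_amounts = rng.normal(100, 20, size=5_000)   # distribution at training time
live_amounts = rng.normal(130, 25, size=1_000)    # a seasonal shift in live traffic

if data_drift_flag(train_amounts, live_amounts):
    print("Input distribution has shifted: investigate and consider retraining.")
```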

Open-source Python libraries offer tools to help measure these three kinds of drift, but detecting algorithm drift also requires the expertise of sharp-witted data scientists who stay on top of technology innovation and ensure efforts are aligned with evolving customer needs.
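
For teams that want a single headline number per input feature, one widely used statistic is the population stability index (PSI). Below is a minimal hand-rolled sketch, with an assumed ten-bin layout and the commonly quoted alerting thresholds:

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """PSI between a baseline (training) sample and a recent (live) sample.
    Bin edges come from the baseline; live values outside that range are
    simply not counted in this simple sketch."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    baseline_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + 1e-6
    live_pct = np.histogram(live, bins=edges)[0] / len(live) + 1e-6
    return float(np.sum((live_pct - baseline_pct) * np.log(live_pct / baseline_pct)))

# Example with synthetic samples; rule of thumb often quoted:
# < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
rng = np.random.default_rng(1)
print(population_stability_index(rng.normal(0, 1, 5_000), rng.normal(0.5, 1, 1_000)))
```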

Taking a company-wide continuous AI approach

Continuous AI also goes beyond AI initiatives — it should be a component of the overall data ecosystem, supporting the value of data across the organization. That includes internal prescriptive and descriptive analytics, collaboration and more.

“AI models have become a crucial part of our business processes since they’re integrated into our operational processes, so it’s been very important to continue to keep, and even increase, their performance and reliability,” Rodriguez says. “Our models, which predict which clients might churn, were very accurate when we first built them, but we observed their performance decrease over time. After strategizing with Blue.cloud to develop the right continuous AI approach, we can maintain their accuracy, and thus the business value.”

In the end, achieving ROI from AI requires three dimensions: people, technology and processes, and continuous AI is central to all of them. The right technological infrastructure requires well-maintained data pipelines, lakes and warehouses; streamlined processes to monitor, recalibrate and execute algorithms; and synchronized teams across business areas to build and operationally support those models. Cooperation and real-time data are essential to nailing down business demand, how to map a problem to a model, how to measure and monitor performance, and ultimately business value.

“AI models are about efficiency, either by increasing revenue or decreasing costs. In order to keep benefiting from the value of AI, continuous AI is a crucial approach,” Rodriguez says. “My advice to a business leader would be to have the right governance and organization, and also to work with the right partners that have the right combination of people with business understanding, data science and technological skills.”

Learn more here about the strategies, tools, and solutions that are helping companies eliminate obsolete models and reach their AI goals, even as the technology evolves.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact [email protected].

