
AI and financial processes: Balancing risk and reward

source link: https://venturebeat.com/2021/07/16/ai-and-financial-processes-balancing-risk-and-reward/




Of all the enterprise functions influenced by AI these days, perhaps none is more consequential than finance. People don’t like other people fiddling with their money, let alone an emotionless robot.

But as it usually goes with first impressions, AI is winning converts in monetary circles, in no small part due to its ability to drive out inefficiencies and capitalize on hidden opportunities – basically creating more wealth out of existing wealth.


Attention to detail

One of the ways it does this is to reduce the cost of accuracy, says Sanjay Vyas, CTO of Planful, a developer of cloud-based financial planning platforms. His take is that while finance has lagged in the adoption of AI, it is starting to catch up as more tech-savvy professionals enter the field. A key challenge in finance is to push data accuracy as far as you can without it costing more than you are either saving or earning.

To date, this effort has been limited largely by the number of man-hours an organization is willing to devote to achieving accuracy. AI turns this equation on its head: it can work around the clock, zeroing in on the most minute discrepancies.
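
To make that concrete, here is a minimal sketch (not from the article) of the kind of always-on discrepancy check such a system automates; the ledger data, column names, and tolerance below are all hypothetical.

```python
# Minimal sketch: automated reconciliation of two ledger extracts.
# The data, column names, and tolerance are hypothetical.
import pandas as pd

ledger = pd.DataFrame({
    "invoice_id": ["A1", "A2", "A3", "A4"],
    "amount": [1200.00, 560.50, 89.99, 4310.00],
})
bank = pd.DataFrame({
    "invoice_id": ["A1", "A2", "A3", "A4"],
    "amount": [1200.00, 565.50, 89.99, 4310.00],
})

# Join on invoice ID and flag any amount that differs beyond a one-cent tolerance.
merged = ledger.merge(bank, on="invoice_id", suffixes=("_ledger", "_bank"))
merged["discrepancy"] = (merged["amount_ledger"] - merged["amount_bank"]).abs() > 0.01

print(merged[merged["discrepancy"]])  # flags invoice A2 (560.50 vs. 565.50)
```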

This will likely be a particular boon for smaller organizations that lack the resources and the scale to make this kind of data analysis worthwhile. And as we’ve seen elsewhere, it also frees up time for human finance specialists to concentrate on higher-level, strategic initiatives.

Finding the bad actors

AI is also contributing to the financial sector in other novel ways — fraud detection, for example. GoodData senior content writer Harry Dix recently highlighted the multiple ways in which careful analysis of data trails can quickly lead to fraud discovery and the take-down of perpetrators. Most frauds require careful coordination between multiple players to disguise their crimes as normal transactions, but a properly trained AI model can drill down into granular data sets to detect suspicious patterns. And it can do this much faster than a human examiner, often detecting the fraud before it has been fully executed and assets have gone missing.
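
The article doesn’t name a specific technique, but a common starting point for this kind of pattern detection is unsupervised anomaly scoring over transaction features. Below is a minimal sketch using scikit-learn’s IsolationForest on entirely synthetic data; real systems would use far richer signals, such as counterparties, timing, and the network structure between accounts.

```python
# Minimal sketch: flag anomalous transactions with an isolation forest.
# The two features (amount, hour of day) and all data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal transactions: modest amounts at typical hours.
normal = np.column_stack([rng.normal(100, 30, 1000), rng.normal(14, 3, 1000)])
# A few suspicious ones: large amounts in the middle of the night.
odd = np.array([[5000.0, 3.0], [4200.0, 2.0], [6100.0, 4.0]])
X = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # -1 = anomaly, 1 = normal

print(X[labels == -1])  # the injected odd transactions should appear here
```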

Implementing AI in financial processes is not just a way to get ahead, social media entrepreneur Annie Brown says on Forbes — it is necessary to remain afloat in an increasingly challenging economy. With fintech and digital currencies now mainstream, organizations that cannot keep up with the pace of business will find themselves on the road to obsolescence in short order.


New breeds of financial services — everything from simple banking and transaction processing to sophisticated trading and capital management — are cropping up every day, virtually all of which are using AI in one form or another to streamline processes, improve customer service, and produce greater returns.

Keeping AI and financial processes honest

Still, the overriding question regarding AI in financial processes is how to ensure the AI behaves honestly and ethically. While honesty and ethics haven’t exactly been hallmarks of the financial industry throughout its human-driven history, steps can be taken to ensure AI will not knowingly deliver poor outcomes to users. The European Commission, for one, is developing a legal framework to govern the use of AI in areas like credit checks and chatbots.

At the same time, the IEEE has compiled a guidebook with input from more than 50 leading financial institutions from the U.S., U.K., and Canada on the proper way to instill trust and ethical behavior in AI models. The guide offers multiple tips on how to train AI with fairness, transparency, and privacy across multiple domains, such as cybersecurity, loan and deposit pricing, and hiring.

It seems that finance is feeling the push and pull of AI more than other disciplines. On the one hand is the lure of greater profits and higher returns; on the other is the fear that something could go wrong, terribly wrong.

The solution: Avoid the temptation to push AI into finance-related functions until the enterprise is ready. Just like any employee, AI must be trained and seasoned before it can be entrusted with higher levels of responsibility. After all, you wouldn’t promote someone fresh out of college to CFO on their first day. Start AI out with low-level financial responsibilities and let it prove itself worthy of promotion — just like any other employee.

Sponsored

Government and business can develop an ethical AI future together, KPMG study finds

VB Staff, June 08, 2021 05:20 AM


Presented by KPMG 


The pandemic turned the world upside down and businesses stepped up to the challenge, accelerating their digital transformation and harnessing the power of artificial intelligence to help overcome new challenges in a new world.

A new study by KPMG, “Thriving in an AI World: Unlocking the Value of AI across 7 Industries,” found that while some executives are experiencing a bit of COVID-19-induced whiplash as they reckon with AI challenges, industry leaders are optimistic about the new administration’s role in helping to achieve an AI-forward future.

“We reached out to decision-makers, many of whom said AI is moving too fast, but many also felt that the U.S. is being left behind when it comes to AI adoption,” says Swami Chandrasekaran, managing director at the KPMG Digital Lighthouse and Head of Digital Solutions Architecture.

Yet overwhelmingly, industry leaders believe the Biden administration will not only help advance the adoption of AI, they also believe the government has an essential role to play in regulating AI technology as adoption grows.

This confidence comes from a confluence of major events across the globe, Chandrasekaran says, including how the pandemic accelerated activity in the AI landscape among both consumers and enterprises. Major companies and technology vendors are investing more rapidly in the technology, a growing number of AI startups are springing up every week, and the way ordinary people interact in their daily lives has changed fundamentally.

“The huge uptick in mainstream AI technologies coming to the market, data being made available, and AI becoming increasingly ubiquitous in daily life because of the pandemic all come parallel to this change in our administration,” he says. “This intersection point is causing these expectations to rise.”

What industry leaders want from the Biden administration

Business leaders firmly believe the government has an essential role to play in regulating AI technology. And industry execs from industrial manufacturing (90%), technology (88%), and retail (85%) are most optimistic that the Biden administration will help advance the adoption of AI in the enterprise.

Younger respondents were more optimistic, Chandrasekaran says, with 90% of Gen X leaders positive about the current administration versus 79% of baby boomers. But expectations around how and where the administration would play a role in adoption differ, with government execs focused on health care and vaccine rollouts as well as defense and national security.

The industrial and manufacturing industry wants to ramp up AI adoption as a solution for things like predictive equipment maintenance and maintenance schedule optimization, product design and engineering, and supply chain optimization. Meanwhile, health care execs believe the administration will help adoption in use cases like telemedicine and patient care, as well as vaccine administration.

Advancing AI: Where business fits in

Going forward, while leaders across industries recognize how essential the government’s role is in regulating AI, navigating the evolving AI landscape will have to be a collaborative effort. Trust in government as an authority on AI has been growing, but 33% of respondents identified business as the most trusted authority.

The bipartisan National Security Commission on Artificial Intelligence also recently warned that the U.S. isn’t yet prepared to defend or compete in the AI era. The technologists, national security professionals, business executives, and academic leaders of the committee have spelled out an AI strategy – a comprehensive roadmap for government to defend against AI threats, employ the technology responsibly for national security, and secure the country’s prosperity, security, and welfare by winning the global technology race.

However, to execute that strategy, and to continue driving the AI narrative in the U.S., the committee said government will need to partner with business leaders, academia, and civil society. In part, that comes from the need for responsible, effective AI, Chandrasekaran says.

“Security, privacy, and ethics are posing the biggest risks for AI, and in our study, both business and government decision-makers unanimously agreed that there needs to be an AI ethics policy,” he explains.

However, in the rush to adopt and implement AI strategies, tools, and solutions, particularly over the past year, many organizations don’t yet have an ethics policy in place — or it’s just not being enforced.

Only 53% of government leaders said their department has an ethics policy, while 70% said AI is moving so fast that it’s hard to keep up; a policy that works today may be obsolete next week.

Many study respondents were ready to accept the government defining those regulations — including 86% of leaders in financial services.

“Across the board, having a baseline set of governing policies and ethics is not a bad thing for the government to define — but at the same time, make sure you don’t stifle innovation,” Chandrasekaran says. “The government can help define baseline regulation, but after that, the business role is creating the executable version of an AI ethics policy.”

Businesses need to implement conscious, continuous monitoring for bias and drift right from the start as they develop their AI models. Imbalances will and do occur in data and models, and in worst-case scenarios they can land a business in the headlines. This monitoring needs to happen alongside greater transparency and explainability of AI models. For instance, if an AI algorithm rejects a consumer loan application, the model’s results should make clear why that conclusion was reached, including the counterfactual: what would have had to change for the loan to be approved.
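
As an illustration of that last point (a sketch of one simple approach, not the study’s method), a basic counterfactual explanation can be produced by searching for the smallest change to a single feature that flips the model’s decision. The model, features, and applicant below are all hypothetical.

```python
# Minimal sketch: a one-feature counterfactual for a rejected loan application.
# The model, the features (income in $k, debt ratio), and all data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(20, 150, 500), rng.uniform(0.05, 0.9, 500)])
y = (X[:, 0] * 0.02 - X[:, 1] * 2.0 + rng.normal(0, 0.3, 500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[40.0, 0.6]])  # low income, high debt ratio
print("decision:", model.predict(applicant)[0])  # expected: 0 (rejected)

# Scan increasing incomes to find the smallest one that flips the decision.
for income in np.arange(40.0, 150.0, 1.0):
    if model.predict([[income, 0.6]])[0] == 1:
        print(f"Counterfactual: approve if income rises to about ${income:.0f}k")
        break
```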

Businesses also need to plan for continuous evaluation, Chandrasekaran adds, because building a model and checking it for bias isn’t a one-and-done operation. As models learn and develop, and as new data is added, they must be continuously evaluated for inherited bias and drift. And from a security and privacy perspective, businesses need to continually check the model’s resilience with security penetration tests.
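
The study doesn’t prescribe tooling for this, but one common lightweight drift check is the Population Stability Index (PSI), which compares the distribution of a model input or score at training time against live traffic. A minimal sketch on synthetic data:

```python
# Minimal sketch: Population Stability Index (PSI) as a drift check.
# Rule of thumb (an industry convention, not from the study):
# PSI < 0.1 is stable, 0.1-0.25 warrants a look, > 0.25 is significant drift.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare two samples of one variable over quantile bins of `expected`."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.4, 0.10, 10_000)  # score distribution at training time
live_scores = rng.normal(0.5, 0.12, 10_000)   # live traffic has shifted upward

print(f"PSI = {psi(train_scores, live_scores):.3f}")  # lands well above 0.25 here
```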

Many clients Chandrasekaran is working with acknowledge that they need to bring bias detection, imbalance detection, and drift detection into their software development lifecycle, including DevOps, because at the end of the day an AI model is, at its core, software, he says. But that’s just the first step.

“If you acknowledge that you need to run these tests, use these tools, then businesses need to ask themselves, which are the metrics to measure and how do you quantify them? What is the threshold based on which you pass or fail? What are the tools and technologies that I need to bring into this process? What should my DevOps for AI look like?” he explains. “Now you’re getting to an executable version of an AI ethics policy.”
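
A sketch of what that “executable version” might reduce to in practice: a pass/fail gate in a CI pipeline. The metric names and thresholds below are invented for illustration; real values would come from whatever bias, drift, and robustness tests a team standardizes on.

```python
# Minimal sketch: an "executable ethics policy" as a pass/fail gate in CI.
# Metric names and thresholds are invented for illustration.
THRESHOLDS = {
    "demographic_parity_gap": 0.05,  # max allowed approval-rate gap between groups
    "psi_drift": 0.25,               # max allowed input/score drift
}

def ethics_gate(metrics: dict) -> bool:
    """Fail the build if any measured metric exceeds its threshold."""
    failures = {name: value for name, value in metrics.items()
                if name in THRESHOLDS and value > THRESHOLDS[name]}
    for name, value in failures.items():
        print(f"FAIL {name}: {value:.3f} exceeds limit {THRESHOLDS[name]:.3f}")
    return not failures

# Example run: drift is within bounds, but the fairness gap trips the gate.
passed = ethics_gate({"demographic_parity_gap": 0.08, "psi_drift": 0.12})
print("gate passed" if passed else "gate failed")
```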

Moving forward into an ethical AI future

Business leaders are clear in their belief that AI will yield tangible results for their business and their industry. And they are optimistic about the impact the Biden administration will have on AI adoption and regulations, but achieving those goals requires businesses to make significant investments up front, Chandrasekaran says.

That includes prioritizing, refactoring, or transforming large applications and systems into reusable microservices that allow AI to be embedded or integrated into them. It also includes complying with the data security and privacy regulations already in existence.
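
The study doesn’t prescribe an architecture, but one common pattern behind that microservices point is to expose a model behind a small, reusable HTTP service that any application in the company can call. A hedged sketch using FastAPI; the service name, fields, and scoring logic are placeholders.

```python
# Minimal sketch: a model exposed as a reusable scoring microservice (FastAPI).
# Run with: uvicorn scoring_service:app
# The feature fields and the inline scoring rule stand in for a real model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class LoanFeatures(BaseModel):
    income_k: float    # annual income, in $k (hypothetical feature)
    debt_ratio: float  # debt-to-income ratio (hypothetical feature)

@app.post("/score")
def score(features: LoanFeatures) -> dict:
    # Placeholder for model.predict(...); any application can call this endpoint.
    approved = features.income_k * 0.02 - features.debt_ratio * 2.0 > 0
    return {"approved": approved}
```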

“Everybody is very conscious of the fact that you don’t want to create an AI model that cannot be measured or quantified for things like bias,” he says. “But care has to be taken to ensure you’re using only the data you’re supposed to use, and respecting the privacy of the individuals from whom the data may have been collected.”

Companies also need to invest in their people, upskilling existing employees and making them data and AI literate. They must put a solid data infrastructure in place to train the AI models. And, always, they must evaluate AI use cases in terms of their impact on the business.

“There’s a vital balancing act in nailing down the budget and resources needed to implement these AI investments — how do you compete and make tradeoffs with investment in other areas of your business?” Chandrasekaran says. “With clients, we challenge them and ask, why this use case? What is the business value? What’s the return on investment? What metrics can we quantify? There’s always a business value.”

Dig Deeper: Read the entire 2021 KPMG study, “Thriving in an AI World.”



