
IBM marks 2024 as the year of balancing technology and trust

source link: https://diginomica.com/ibm-marks-2024-year-balancing-technology-and-trust


In 2024, as you are probably aware by now, it is election year in 40 countries. The electoral action kicks off with Taiwan this month and concludes in November with the US. Indeed, Bloomberg Economics calculates that 41% of the world’s population will be electing new leaders this year.

We also know that generative AI is showing promise in many walks of life, including the dissemination of disinformation and deepfakes that could skew the results of many of these leadership contests in the biggest election year in history.

Big Tech was already under scrutiny in 2023; 2024 is the year it needs to prove it can be trusted by governments, citizens and business. No surprise, then, that IBM is mobilising its market positioning to address the issue of responsible AI, with the IBM Institute for Business Value (IBV) declaring that, “our research shows 2024 will be the year when business leaders need to balance technology and trust.”

Will lack of trust curb business spend on AI?

According to IBM’s IBV report on Responsible AI and Ethics, “globally, fewer than 60% of executives think their organizations are prepared for AI regulation—and 69% expect a regulatory fine due to generative AI adoption.

“In the face of this uncertainty and risk, CEOs are pumping the brakes. More than half (56%) are delaying major investments until they have clarity on AI standards and regulations. Overall, 72% of executives say their organizations will forgo generative AI benefits due to ethical concerns.”

Heather Gentile, Director of watsonx.governance Product Management at IBM Data and AI Software, comments that ever since the launch of ChatGPT, organizations have tightened their policies and procedures around the adoption of AI because of ethical concerns. To develop responsible AI, organizations need to have metrics and controls in place.

However, risk management is challenging because projects are typically spun up in siloed areas of activity. Generative AI also introduces new risks of its own, including hallucinations, the leakage of PII data, profane output and so on, each of which needs explicit controls.
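To make that concrete, the sketch below shows, in deliberately simplified form, the kind of output guardrail such controls imply: a check that screens a generated response for obvious PII patterns and blocklisted terms before it reaches a user. The patterns and the blocklist here are placeholders of my own, not anything IBM ships.

```python
import re

# Illustrative only: a minimal output guardrail, not IBM's implementation.
# The PII patterns and profanity list are placeholder assumptions.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
PROFANITY = {"damn", "hell"}  # placeholder blocklist


def screen_output(text: str) -> dict:
    """Flag PII and profanity in a model response before it is released."""
    findings = {name: p.findall(text) for name, p in PII_PATTERNS.items()}
    words = {w.strip(".,!?").lower() for w in text.split()}
    profane = sorted(words & PROFANITY)
    return {
        "pii": {k: v for k, v in findings.items() if v},
        "profanity": profane,
        "blocked": any(findings.values()) or bool(profane),
    }


if __name__ == "__main__":
    sample = "Contact me at jane.doe@example.com about the damn invoice."
    print(screen_output(sample))
    # {'pii': {'email': ['jane.doe@example.com']}, 'profanity': ['damn'], 'blocked': True}
```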

Organizations need to raise their governance game for generative AI

IBM first launched its Watson platform in 2010 but market acceptance of the technology at that point was slow. Last year IBM recast its Watson capability by introducing the watsonx platform, watsonx.ai and watsonx.data (enabling the platform to access data from any cloud).

The firm has recently added watsonx.governance to the watsonx portfolio. It assesses requests and evaluates models for specific use cases, capturing model facts and tracking how a model changes over time, checking balance, drift and risk, and ensuring adherence to regulations. watsonx.governance is infused into the watsonx.ai workflow, monitoring the AI lifecycle from the request for a use case, through the testing of models, to ongoing evaluation in production. When watsonx.governance raises an alert, it is routed to prompt engineering and the governance process is captured automatically. The product also comes with a model inventory that matches requirements to controls for risk management and compliance in real time.
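For readers who think in code, here is a minimal sketch of the pattern described above rather than the watsonx.governance API itself: an inventory record per use case that accumulates evaluation facts over time and raises an alert when a monitored metric drifts past a tolerance. The metric, threshold and names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A sketch of the inventory-plus-drift-alert pattern; not the watsonx.governance API.
@dataclass
class ModelRecord:
    use_case: str
    model_name: str
    drift_threshold: float = 0.05          # assumed tolerated accuracy drop
    evaluations: list = field(default_factory=list)

    def log_evaluation(self, accuracy: float) -> None:
        """Capture an evaluation snapshot as part of the model's lifecycle facts."""
        self.evaluations.append(
            {"timestamp": datetime.now(timezone.utc).isoformat(), "accuracy": accuracy}
        )

    def drift_alert(self) -> bool:
        """Alert if accuracy has dropped more than the threshold since the baseline."""
        if len(self.evaluations) < 2:
            return False
        baseline = self.evaluations[0]["accuracy"]
        latest = self.evaluations[-1]["accuracy"]
        return (baseline - latest) > self.drift_threshold


record = ModelRecord(use_case="claims-triage", model_name="summariser-v2")
record.log_evaluation(accuracy=0.91)   # approval-time baseline
record.log_evaluation(accuracy=0.83)   # later production evaluation
print(record.drift_alert())            # True: a drop of 0.08 exceeds the 0.05 threshold
```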

Given that understanding the journey of data from source systems to end use is arguably as important as, or more important than, understanding the LLMs themselves, IBM has further strengthened watsonx.governance by acquiring Manta, which gives IBM a way to govern the data feeding the models. Manta supports over fifty scanners and offers catalog integration (pushing data into many catalogs). It can trace technical lineage, for example at the level of the syntax of transformation rules, as well as historical lineage, showing how the data feeding a pipeline has changed over time.
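The lineage idea is easy to picture as a directed graph. The sketch below is a hypothetical illustration of that structure, not Manta’s implementation: each dataset or job records its direct upstream sources, and a walk up the graph reveals everything that feeds the data behind a model. The dataset names are invented for the example.

```python
from collections import defaultdict

# Hypothetical lineage graph: target -> set of direct upstream sources.
edges = defaultdict(set)


def add_lineage(source: str, target: str) -> None:
    edges[target].add(source)


def upstream(node: str) -> set:
    """Return every dataset or job that (transitively) feeds `node`."""
    seen, stack = set(), [node]
    while stack:
        for parent in edges[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen


# A toy pipeline: CRM and billing extracts are joined, then used to build a training set.
add_lineage("crm.customers", "staging.joined_accounts")
add_lineage("billing.invoices", "staging.joined_accounts")
add_lineage("staging.joined_accounts", "features.training_set")
add_lineage("features.training_set", "models.support_assistant")

print(sorted(upstream("models.support_assistant")))
# ['billing.invoices', 'crm.customers', 'features.training_set', 'staging.joined_accounts']
```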

Obviously, successful governance requires broad organizational engagement as well as tooling, so IBM Data and AI Software is keen to stress that it works closely with IBM Consulting to help clients understand their maturity for adopting generative AI.

To be effective, a wide variety of stakeholders needs to be involved in developing an organization’s responsible AI approach. The entire C-suite has to be engaged in pushing best practice out to different stakeholders, ensuring that data scientists, operations researchers, application developers and machine learning engineers work in line with both organizational ethics and external legislation.

My take

The ability to regulate and manage the use of generative AI is going to be a huge tech trend in 2024, not least because governments, law enforcement agencies and the big social media platforms will be keeping an eagle eye on its illicit use to sway election results. Businesses are understandably concerned about the liabilities that use of AI may expose them to, while trying to balance those risks against the competitive opportunities the technology offers.

IBM is in an interesting position in this context as it has the advantage of owning both AI technology and a large professional services arm, which distinguishes it from competitors such as Microsoft and Google on the one hand and Accenture and Deloitte on the other. It also has a reputation for conservative caution that plays well to many corporate teams attempting to navigate the somewhat febrile atmosphere surrounding the adoption of generative AI.

It is thus possible that the watsonx portfolio, combined with IBM Consulting expertise, will breathe new life into the old corporate adage that nobody gets fired (or sued, or fined) for buying IBM.

