source link: https://diginomica.com/new-practices-needed-trustworthy-ai-some-expert-commentary

New practices needed for trustworthy AI - some expert commentary

By George Lawton

December 19, 2023


Trustworthy AI is growing increasingly important as enterprises pick up the pace of rolling out new AI services. But at the moment, there are too many theories and not enough practice, says Brian Green, Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University, who explains:

Companies are making real decisions for real products, and therefore, they need to institute real processes to help safeguard themselves and the public from risks associated with their products.

We've previously covered some essential aspects of the kinds of operational metrics and practices that could help. For this article, diginomica reached out to various experts on other aspects that need to be considered around security, data, disclosures, and certifications.

Security

Daryan Dehghanpisheh, President and Co-Founder of Protect AI, argues that enterprises need to replicate and extend the transparency, audit, and continuous monitoring controls already in place for infrastructure, operations, and traditional application security to the Machine Learning Operations (MLOps) lifecycle. He says:

Machine Learning (ML) security is business critical, but current tools and security practices are not sufficient to meet the emerging need for a robust ML security strategy tailored to the unique threats and risks of AI applications. The dynamic nature of ML systems introduces many unseen data risks in the workflow, such as the accidental leaking or inclusion of PII, that neither the security nor the ML teams have visibility into. Furthermore, organizations do not have the human talent, resources, or tools to easily detect the threats and vulnerabilities that are likely present in their ML systems.

This process of extending existing monitoring to AI/ML needs to begin with knowing all the assets involved in building and managing production AI/ML systems. Since the underlying models are a new type of asset in enterprise infrastructure, organizations need to be able to scan, identify, and enumerate all the components in an AI application. That, in turn, starts with an inventory list of all those components.

The AI security industry is starting to characterize this as a Machine Learning Bill Of Materials (MLBOM) that can support the same security and governance workflows as a traditional software bill of materials (SBOM). The MLBOM can attach new capabilities, such as model-specific vulnerability scanning tools and supply chain threat feeds unique to the components of an AI application and ML system. This, in turn, enables purpose-built incident response management tools that allow an organization to know where every version of every model is, what it contains, why it may be vulnerable, and how to create runbooks to fix issues.
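To make the MLBOM idea concrete, here is a minimal sketch of what such an inventory could look like in Python. There is no single standardized MLBOM schema yet, so the field names and example components below are illustrative assumptions rather than an established format.

```python
# Illustrative MLBOM sketch: field names are assumptions, not a published schema.
from dataclasses import dataclass, field, asdict
from typing import List
import json


@dataclass
class MLComponent:
    """One asset in an AI application: a model, dataset, or library."""
    name: str
    kind: str                 # "model", "dataset", or "library"
    version: str
    source: str               # registry URL, vendor, or internal path
    sha256: str = ""          # content hash so the exact artifact can be traced
    licenses: List[str] = field(default_factory=list)


@dataclass
class MLBOM:
    """Inventory of every component that goes into one ML system."""
    system_name: str
    components: List[MLComponent] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


# Example: enumerate the pieces of a hypothetical fraud-detection service.
bom = MLBOM(system_name="fraud-detection")
bom.components.append(MLComponent(
    name="fraud-classifier", kind="model", version="3.1.0",
    source="internal-model-registry", sha256="<artifact hash>"))
bom.components.append(MLComponent(
    name="transactions-2023Q4", kind="dataset", version="2023.12",
    source="s3://example-bucket/transactions"))
bom.components.append(MLComponent(
    name="scikit-learn", kind="library", version="1.3.2",
    source="pypi", licenses=["BSD-3-Clause"]))

print(bom.to_json())  # this inventory can then feed scanning and incident-response tooling
```

An inventory in this shape is what lets the security and governance workflows Dehghanpisheh describes answer questions such as where every version of every model is and what it contains.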

Dehghanpisheh argues that AI efforts will need to account for a variety of new threats that differ from traditional security and governance workflows:

Vulnerabilities in AI/ML are unique and novel - like prompt injections. But, they also have traditional types of issues, such as Remote Code Execution (RCE), Local File Inclusion (LFI), Remote File Inclusion (RFI), and authentication bypasses. For us to build more securely, AI will require modifications to how we think about threat severity and classification. Traditional CVE methods may be a good starting point, but that construct doesn’t map entirely well to the AI class of software. The industry will need to think of a new framework for identifying and classifying vulnerabilities in AI/ML systems.

A good starting point might be extending existing industry security collaborations to support AI. For example, the Financial Services Information Sharing and Analysis Center (FS-ISAC) serves as a valuable, consortium-based, wisdom-of-the-crowds approach for advancing cybersecurity and resilience in the financial industry. It is also important to distinguish between AI threat insights and mitigation methods.

For instance, Protect AI's Huntr (huntr.com), a crowd-sourced AI/ML threat intelligence platform, can serve as a central repository for known threats across industries. Vertical-specific organizations like FS-ISAC can then customize how these threats from Huntr may affect their members, enabling more adaptable and personalized risk mitigation strategies. This approach ensures that vulnerabilities and solutions are known while allowing flexibility in their application based on varying industry and government risk tolerances and regulatory frameworks.
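As a rough illustration of the classification gap Dehghanpisheh describes, a finding in an ML system may need to carry both a traditional weakness class and an AI-specific one that has no CVE equivalent today. The category names below are assumptions made for the sake of the example, not an established taxonomy.

```python
# Illustrative only: category names are assumptions, not an agreed framework.
from dataclasses import dataclass
from typing import Optional

TRADITIONAL_CLASSES = {"RCE", "LFI", "RFI", "auth-bypass"}
AI_SPECIFIC_CLASSES = {"prompt-injection", "training-data-leakage", "model-poisoning"}


@dataclass
class MLFinding:
    component: str                    # which MLBOM component is affected
    description: str
    traditional_class: Optional[str]  # maps loosely onto CVE-style categories
    ai_class: Optional[str]           # no CVE equivalent today
    severity: str                     # placeholder scale: "low" / "medium" / "high"


finding = MLFinding(
    component="fraud-classifier",
    description="Crafted free-text input overrides the system prompt",
    traditional_class=None,
    ai_class="prompt-injection",
    severity="high",
)
print(finding)
```

Records of roughly this shape are also what a shared repository or an ISAC-style consortium would need in order to redistribute threats with industry-specific context.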

Tangibly monitoring data

Caroline Carruthers, CEO of Carruthers and Jackson, a global data consultancy, says that AI/ML innovation is an extremely important area, but its rapid pace cannot become an excuse for not learning along the way. She says:

Where organizations are currently struggling from a data and AI perspective is understanding what they feel they can tangibly monitor and what is actually making a difference. This includes safety and risk management. When it comes to AI, it’s not about making the right decision, it’s about proactively making a decision and then learning from the safety and risk management implications that have knock-on effects down the line. It’s a constant iterative process to keep up with what’s happening in data and AI, and failure to move quickly will see organizations left behind.

One challenge is that data management processes have become unwieldy: plenty of best practices are in place, but organizations are learning from them too slowly. To improve processes around AI, organizations need to build for agility and flexibility. This requires a never-ending virtuous cycle of adaptation to keep ahead of the technology changes happening all around us. Carruthers says:

You can’t prepare for the unknown, but you can accept that the ‘unknown’ is your current state of reality, and then it becomes easier to adapt more quickly. What people forget is that AI is just a tool, and, like any tool, it has the potential to transform the way we all do our jobs. A good parallel is the impact machinery had on the manufacture of cloth during the Industrial Revolution: hand looms were no longer used for mass-market fabric production once power looms arrived, but people were still employed to operate the machinery, and a new market was created where a premium would be paid for ‘artisanal’ fabrics made using the old looming method. Similarly, when it comes to AI safety monitoring, we don’t know precisely what’s going to happen, but it’s still happening around us, so we need to embrace the chaos and move fast enough to get the most value from it.

Disclosures

O’Reilly Managing Director Tim O’Reilly has suggested that companies should disclose the instructions given to their AI systems as a means of regulating them more effectively. Some companies already do this: Anthropic’s constitutional AI approach gives its AI system, Claude 2, a set of directions, or principles, to guide its choices.

Ann Gregg Skeet, Senior Director of Leadership Ethics at the Markkula Center for Applied Ethics at Santa Clara University, believes this approach, over time, might cause organizations to coalesce around a set of guidance for machine learning systems.  However, she is skeptical this will become widespread without industry certification practices. She argues:

As much as I would like to believe such common practices can be developed and required, the medical and financial professions have something that the AI industry does not have, namely a certification process for the professionals in it.  The Generally Accepted Accounting Principles Mr. O’Reilly references are upheld by certified public accountants or CPAs.  Doctors are also board-certified.  At least for now, no such certification process exists for people building AI systems. Certification processes align interests by specifying professional ethical standards. Doctors and CPAs understand that certain actions and behaviors are expected of them and that failing to act in a way that meets the standards of their profession could result in losing their certifications. In this way, they have a commitment to their profession that is even more significant than any commitment they might make to an organization they are working for. There is not yet such a set of commitments that people in the AI industry are making to a single set of standards.

Currently, the AI industry has a proliferation of standards, many with similar phrasing and proposed requirements, including standards developed at the company level, which function more as internal guidance than as industry standards. One result is that the US National Institute of Standards and Technology has begun publishing documents that map the similarities and differences between the NIST AI Risk Management Framework and the ISO/IEC 42001 standard, which is intended to be a certifiable AI management system framework once it is adopted.

Skeet says:

These are just two of many sets of principles and standards that have been developed. Which standard the industry coalesces around remains to be seen. If individuals also needed to become certified to work on AI systems, then we might see more solid standardization around a single set of professional ethics along the lines of the medical and financial fields.

Retrieving the standards

Another approach might be to find ways to embed relevant standards and practices directly into AI processes in an actionable way. Fluree CEO Brian Platz said the industry must look for ways to incorporate relevant guidance into the retrieval-augmented generation (RAG) processes used to narrow and hone generative AI results. In this approach, RAG could continuously update and inform models on industry-wide regulations and safety guidelines. Platz explains:

Part of the issue in AI safety and risk management is just how rapidly new models are being developed and deployed, juxtaposed against the ever-changing regulatory and privacy landscape. RAG models can dynamically pull in the most recent information and guidelines (provided they are established by the relevant governing bodies, as well as internal risk and safety teams), ensuring that the AI's responses and actions are aligned with the latest standards and best practices. This adaptability should be a de facto component of a private organization's AI strategy. By integrating RAG into AI systems, organizations can create models that are not only more responsive to current information but also more compliant with the latest regulatory and ethical standards.
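As a rough illustration of the pattern Platz describes, the sketch below assumes a small, regularly refreshed collection of guideline snippets and a placeholder model call; it is not Fluree's implementation. In practice the crude keyword scorer would be replaced by vector search over the organization's governed guideline store.

```python
# Minimal RAG sketch: guideline text and the call_model() stub are placeholders.

GUIDELINES = [
    "Personally identifiable information must be masked before model training.",
    "Generated credit decisions must include an explanation reviewable by a human.",
    "Models handling EU customer data must follow the latest applicable AI guidance.",
]


def score(query: str, document: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(document.lower().split()))


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k guidelines most relevant to the query. Because the store is
    consulted at request time, updated regulations and internal policies flow
    into prompts without retraining the model."""
    return sorted(GUIDELINES, key=lambda g: score(query, g), reverse=True)[:k]


def build_prompt(user_query: str) -> str:
    """Prepend the retrieved guidelines so the model's answer is grounded in them."""
    context = "\n".join(f"- {g}" for g in retrieve(user_query))
    return (
        "Follow these current internal and regulatory guidelines:\n"
        f"{context}\n\n"
        f"User request: {user_query}"
    )


def call_model(prompt: str) -> str:
    """Placeholder for whichever LLM API the organization actually uses."""
    raise NotImplementedError


print(build_prompt("Explain why this credit decision was declined"))
```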

My take

Enterprises are still in the early days of sorting out how to implement trustworthy and responsible AI. This will be no easy task since we are still sorting out the implications of bias, hallucinations, and privacy around the use of these new tools. 

Ideally, boards and executives would take these on as a priority to avoid lawsuits, bad publicity, and regulatory overreach. But they may need a little motivation to jumpstart these efforts. As Green says:

In an ideal world, everyone would just do the right thing, no governance would be necessary, and tech products would always be beneficial. The next best world is one where tech companies act as another layer of defense for the public good, self-regulating and achieving the same good end. And the most realistic world is one in which government will step in as a third layer of defense because neither individuals nor corporations are reliable enough. We need all of those layers - individuals, corporations, and governments - to work together to assure the beneficial promise of tech for society.

