
Will 2024 be the year of Shadow AI?



By Miranda Nash

March 7, 2024


Shadow AI and big data concept © peshkov - Canva.com

Last July, a Reuters/Ipsos poll found that more than one in four Americans regularly use ChatGPT at work. A similar survey by CNBC in December 2023 found that 29% of employees were using ChatGPT on the job. This number will continue to grow as more employees experience the productivity gains of generative AI.

The increasing use of generative AI in the workplace follows a trajectory similar to the cloud and mobile adoption trends of the mid-to-late 2000s, when employees began bringing their own apps and devices into the corporate environment. And just like those earlier additions of new apps and devices, generative AI offers an opportunity for increased productivity but is not without risk.

To try to manage these risks, many organizations have implemented policies against the use of generative AI services in the workplace. In fact, according to the Reuters/Ipsos poll, only 22% of the 2,625 respondents said their employers explicitly allowed external tools such as ChatGPT.

As more employees embrace generative AI services, with or without employer permission, here are ways organizations can institute guardrails to prevent negative outcomes.

Business data is not training data

Some generative AI services, like ChatGPT, continue to learn from the data users input into the system. Not only does this expose potentially sensitive corporate data to the internet, but your organization's data may also surface in answers given to similar organizations and competitors using the same service. To prevent corporate data leakage, organizations need to work with generative AI providers that train models during a dedicated phase, using non-sensitive data, and then let customers use those models without their inputs being folded back into the training set.

AI grounded in trustworthy data

Generative AI services that pull from the vast array of content on the internet will struggle to provide information relevant enough to be useful in an enterprise setting unless they are grounded in company-specific context. One way to overcome this challenge is to incorporate a capability known as retrieval-augmented generation (RAG), which ensures generative AI delivers results drawn from known, trusted, company-specific sources. Business applications, for example, are a great source of trusted data. By limiting outputs to trusted business sources, generative AI services can improve the accuracy of results and the effectiveness of these features for users.
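As a rough illustration of the pattern (not any particular vendor's implementation), the sketch below shows the basic RAG flow: retrieve passages from a trusted company index, then constrain the model to answer only from that context. The `embed`, `vector_store`, and `llm_complete` names are hypothetical stand-ins for whatever embedding model, vector index, and LLM endpoint an organization actually uses.

```python
# A rough sketch of the RAG pattern. `embed`, `vector_store`, and
# `llm_complete` are hypothetical stand-ins for the embedding model,
# vector index, and LLM endpoint an organization actually uses.

def answer_with_rag(question: str, embed, vector_store, llm_complete, k: int = 3) -> str:
    # 1. Retrieve the most relevant passages from trusted company sources.
    query_vector = embed(question)
    passages = vector_store.search(query_vector, top_k=k)

    # 2. Ground the prompt in the retrieved context only.
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate the answer from the grounded prompt.
    return llm_complete(prompt)
```

Because the model is told to answer only from retrieved company content, an empty or irrelevant retrieval leads to "I don't know" rather than a fabricated answer.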

Control user prompts

Hallucination issues can arise as users repeatedly prompt generative AI services in search of an answer that meets their need. Hallucinations are not necessarily the wacky, extreme examples sometimes seen in the media; more often they are simply inaccurate information. Inaccurate information can be costly, especially in sensitive areas such as medical advice. To address this challenge, organizations can incorporate guardrails against hallucinations, such as evaluating and quality-checking responses from the generative AI service, and constructing prompts on behalf of the user to prevent manipulation of the model and ensure it delivers high-quality, legitimate, and relevant results.
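A minimal sketch of that idea, assuming a simple expense-policy assistant: the user supplies only the question, the application builds the final prompt from a fixed template, and the response must pass a basic quality gate before it is shown. `llm_complete`, the input filter, and the citation check are all illustrative assumptions, not a production guardrail system.

```python
import re

# The application, not the user, owns the final prompt text.
TEMPLATE = (
    "You are an assistant for company expense-policy questions. "
    "Answer concisely and cite the relevant policy section.\n"
    "Question: {question}"
)

def ask(question: str, llm_complete) -> str:
    # Input filter: reject obvious attempts to smuggle instructions in.
    if re.search(r"ignore (all|previous) instructions", question, re.IGNORECASE):
        raise ValueError("Prompt rejected by input filter")

    # Construct the prompt on the user's behalf from the fixed template.
    response = llm_complete(TEMPLATE.format(question=question))

    # Quality gate: require a policy citation before returning the answer.
    if "section" not in response.lower():
        raise ValueError("Response failed quality check; route to a human")

    return response
```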

Avoid general generative AI

From a business perspective, generative AI services like ChatGPT can be a jack of all trades and a master of none. That’s not to say they won’t improve, but right now generative AI should be use-case-specific to prevent falsehoods, inaccuracies, and hallucinations from infiltrating the organization. We encourage our customers to look at simple use cases where generative AI can bring immediate value, ground the use case in the right data, and embed it into the workflow of an existing application.

Insist on a human in the loop

Ultimately, employee obligations for accuracy and validation have not changed. A human must be included to review, edit, and approve content created by generative AI. In many cases, the employee will already know if something is true or not. For example, a manager using generative AI to summarize employee performance will know if the first draft of an evaluation aligns with what they’ve seen from the employee, heard from peers, and input into their goal-tracking system. In this use case, the manager’s obligation to produce an accurate employee review is unchanged.
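To make the pattern concrete, here is a minimal human-in-the-loop sketch under assumed names (`summarize` and `publish` are hypothetical stand-ins): the model only ever produces a draft, and nothing is published until a reviewer has edited it and explicitly signed off.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved: bool = False

def generate_draft(summarize, notes: str) -> Draft:
    # The model produces a first draft only; nothing is auto-published.
    return Draft(content=summarize(notes))

def review_and_publish(draft: Draft, edited_content: str, publish) -> None:
    # A human reviewer edits the draft and explicitly approves it
    # before the content leaves the review stage.
    draft.content = edited_content
    draft.approved = True
    publish(draft.content)
```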

One key lesson from the mobile adoption era is that organizations need to embrace rather than resist employee-driven technological advancements. As we saw with mobile adoption, organizations that recognized the benefits and established clear policies gained a competitive edge. In contrast, organizations that resisted the trend faced challenges in terms of security, compliance, and missed opportunities for innovation.

Just like mobile and the cloud, generative AI will become a bigger and bigger part of how businesses operate. Business leaders should be taking steps to offer generative AI services that employees can benefit from without exposing sensitive data, infringing copyright, or introducing false information to business systems. 2024 is the year that organizations need to make progress in implementing generative AI or risk falling behind competitors.

