Source: https://diginomica.com/generative-ai-knows-nothing-software-ag-cpo-enterprise-use-cases

Generative AI 'knows nothing' says Software AG's CPO. So what are the enterprise use cases?

By Phil Wainewright

March 21, 2023


(Assorted mechanical robots photo by Jehyun Sung on Unsplash)

Last week saw several significant announcements in the world of generative AI, including OpenAI’s launch of GPT-4, the latest version of the Large Language Model (LLM) that underpins its ChatGPT service, and the announcement of Microsoft 365 Copilot, which brings GPT-4 and other models to Word, Excel, PowerPoint, Outlook and Teams.

GPT-4 brings a step up in capabilities over its predecessor, adding the ability to read the contents of images, and is therefore properly described as a Large Multimodal Model. It is also less prone to making things up when producing an answer — a common problem with present-day LLMs known as hallucination. Microsoft's Copilot, meanwhile, brings generative AI firmly into the business realm, taking the familiar experience of autocomplete to a whole new level with its ability to draft entire passages of text, emails, slide decks and conversation summaries.

Amidst all of the excitement building in the tech community around generative AI, what are the practical enterprise use cases today and where are the limitations? Those enterprises eager to experiment with ChatGPT in their own applications and operations now have access to a ready-made API connector to the OpenAI service, introduced last week for Software AG's webMethods.io integration platform. I spoke to Dr Stefan Sigg, Chief Product Officer at Software AG, who says there are interesting use cases, but enterprises have to be wary of the technology's limitations. He comments:

I think we are evolving to get a feel for what this thing does well and what it doesn't. We need to overcome the perception that this thing knows everything. It doesn't, it truly doesn't. Actually, it knows nothing.

He explains that the way these models work is by assembling answers from the large mass of data they have ingested but with no real understanding of the answers they produce, in the same way that a machine translation engine assembles translated sentences without any innate understanding of the two languages. Therefore it's up to human ingenuity to work out the best way to harness their capabilities. He adds:

I think that people will get smarter and smarter in judging what kind of questions are good questions for ChatGPT and whatnot. Then I think that will be a revolution. If you continue to ask good questions, you will get good answers, and that will make a difference.

Huge volume of data

For example, in the realm of integration, ChatGPT could be helpful when developers are creating mappings between APIs, when the fields in one dataset need to be matched to fields with different names in another dataset. He explains:

That's typically something which can be tedious and error-prone. Even in the past, there have been some promises that AI would help with that a lot. But that wasn't really the case, because there was a lack of data behind the model.

Now it’s possible to get a suggested mapping from ChatGPT, based on the huge volume of data across the Internet. He goes on:

What's the likelihood that this mapping has been done by somebody already, somewhere out there? That mapping might have been documented in Stack Overflow or in GitHub, or wherever. So there is a chance that GPT will find it. I think this is interesting.

I can imagine that for some of the mappings, there will be a great answer. Other mappings, maybe not. But at least you can ask the question, ‘Before I try too much, do you have a good suggestion for me to map those fields?’ And that could be very helpful.
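To make that concrete, here is a minimal sketch of the kind of prompt a developer might send when asking for a mapping suggestion. It is an illustration only, assuming the OpenAI Python client as it stood in early 2023 and invented field names; Software AG's webMethods.io connector wraps the same underlying service rather than this exact code.

```python
# Minimal sketch: asking GPT-4 to suggest a field mapping between two APIs.
# Assumes the `openai` Python package (circa early 2023) and an API key in
# OPENAI_API_KEY; the field names below are purely illustrative.
import json
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

source_fields = ["cust_no", "given_name", "family_name", "postcode"]
target_fields = ["customerId", "firstName", "lastName", "postalCode"]

prompt = (
    "Suggest a mapping from the source API fields to the target API fields. "
    "Answer as a JSON object of source->target pairs, nothing else.\n"
    f"Source fields: {json.dumps(source_fields)}\n"
    f"Target fields: {json.dumps(target_fields)}"
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the suggestion as deterministic as possible
)

# The model usually returns plain JSON for a prompt like this, but a robust
# integration flow would validate the output before using it.
suggested_mapping = json.loads(response["choices"][0]["message"]["content"])
print(suggested_mapping)
```

The point, as Sigg says, is that the suggestion is a starting point drawn from what others have already done; a developer still reviews it before it goes into an integration flow.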

The volume of data that the model has access to is the key to its potential success in this example. Sigg cautions that more restricted enterprise datasets may not be capable of producing the same results. He says:

People think that the magic happens in code, which it doesn't ... It happens through access to that vast — and now with GPT-4 even bigger — amount of possible answers out there; that is the key.

There's no point, in my view, in having ChatGPT do its magic on a bunch of files that you have on a file server. You can do that, and I know that you can also use the augmentation feature, and it's possible that this will also be interesting. But the principle of that technology is, of course, the incredible power of finding what has already been done.
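For readers unfamiliar with the 'augmentation feature' he refers to, the general pattern is to retrieve relevant text from your own documents and pass it to the model alongside the question, so the answer is grounded in your data rather than the model's training set. The sketch below is a hypothetical illustration of that pattern, assuming the early-2023 OpenAI Python client, a naive keyword-overlap retriever and an invented ./docs folder; it is not Software AG's implementation, and a real setup would use embeddings and a vector index.

```python
# Minimal sketch of the 'augmentation' pattern: pull the most relevant local
# text into the prompt so the model answers from it. Retrieval here is a
# naive keyword-overlap score; folder and file layout are illustrative.
import os
from pathlib import Path

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def score(question: str, text: str) -> int:
    """Count how many words from the question appear in the document."""
    words = {w.lower() for w in question.split()}
    return sum(1 for w in words if w in text.lower())

def answer_from_files(question: str, folder: str = "./docs") -> str:
    docs = {p: p.read_text(errors="ignore") for p in Path(folder).glob("*.txt")}
    # Pick the single best-matching document as context (top-1 retrieval).
    best = max(docs, key=lambda p: score(question, docs[p]))
    prompt = (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{docs[best][:4000]}\n\nQuestion: {question}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# Example usage (hypothetical question against local .txt files):
# print(answer_from_files("What does our standard support contract cover?"))
```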

My take

Despite its shortcomings, generative AI feels like a technology that is going to have a big impact. I'm with my colleague Jon Reed — "At least the emperor has some clothes on this time around." But we're still in the very early days, in my view long before the truly disruptive effects begin to be felt.

Thinking about previous technology waves, I'm most reminded of the early days of the Web, before the likes of Amazon, Google and Salesforce got started. There was enormous excitement about the potential of accessing all the world's information, but no one had organized it. The keyword and classification search engines of the day — Infoseek, AltaVista, Excite, Ask Jeeves, Yahoo! and the rest — valiantly tried. But it was not until the innovation of Google's PageRank algorithm brought order to the Web that it finally became possible to reliably find useful results. Generative AI is a raw technology that won't be harnessed reliably until similar innovations come along to package up its capabilities to help people get the results they want or need.

In the meantime, the typical enterprise reaction to these new technologies is, 'We must have one of those,' while missing the real opportunity they bring. So in the early days of the Internet, every enterprise sought to build its own version, called an intranet — often with no access to the open Web except for a handful of privileged exceptions. Later, when cloud computing came along, every enterprise wanted its own private cloud, where it ran SoSaaS implementations of client-server software. Now every enterprise wants to build its own LLM, without waiting for the technology to be reliably harnessed and productized. Those experiments may very occasionally yield useful outcomes, if only through the learning won from making mistakes, but the vast majority will be a huge waste of money and effort.

The other error that people make with new technologies is that they harness them to do the same things they've always done in the past. Early e-commerce sites were built with the same structure as the paper catalogs they replaced, without a search function or recommendations engine. Mobile apps force users to complete lengthy forms on-screen instead of creating a workflow with pre-filled information. GPT-4 will be used to add fake personalization to mass-market sales campaigns, when the technology would be better used to tailor automated interactions to the implicit preferences of the individual user.

Over the coming months, a handful of tech entrepreneurs will start ventures that begin to realize the true potential of generative AI and allied technologies, and in a few years' time the full disruptive impact of this new wave in tech will start to become discernible through the noise. In the meantime, the current iteration of the technology has its uses, but this is not the time to be making big, enterprise-scale bets. Rather, start small, experiment, recognize its limitations, and keep an eye out for those innovative use cases coming out of left field that could upend your job or your industry.
