
source link: https://diginomica.com/ai-will-gpt-put-millennials-out-work-sisense-ceo-warns-middle-will-fall

AI - will GPT put millennials out of work? Sisense CEO warns ‘the middle’ will fall

By Chris Middleton

April 17, 2023




(Image of a person standing between two dark, shadowed buildings, by StockSnap from Pixabay)

As the big noise about large language models and generative AI grows louder, with some users desperate to associate themselves with rash consumer applications, some quieter but more compelling use cases are emerging for the enterprise.

Among them are tools that enable generative AIs to query trusted corporate data with natural-language prompts, producing insights about financial performance, sales, demand, and more. Jostling for market share in this space are vendors such as analytics specialist ThoughtSpot and revenue-focused Clari.

Their core proposition is simple: by querying trusted, bounded data – rather than insights trawled from the Web – enterprises can focus on their own crown jewels rather than polish someone else’s. A persuasive approach: one that, on the face of it, has few downsides. 

But is it as simple as that?

Add business intelligence provider Sisense to this fast-emerging group. The software maker has a roster of 2,000 customers, including Rolls-Royce, General Electric, Verizon, GitLab, Seismic, Skullcandy, Nasdaq, and Philips Healthcare, with a portfolio centred on the AI-enhanced cloud platform Sisense Fusion. Solutions are vertical as well as horizontal, with bespoke products for retail, manufacturing, healthcare, financial services, tech, pharma, and life sciences. Impressive stuff.

As with the company’s peers in this space, Sisense sees OpenAI’s GPT engine as critical to unlocking business insights. Indeed, CEO Amir Orad recently said that ChatGPT “must” be integrated into future analytics tooling – at least, that was the suggestion in a recent communication from the company’s PR representatives. 

But “must” is a strong word for any company to use, or to have associated with it (perhaps wrongly) in corporate communications. 

And, if there is a hierarchy of “musts” in the GPT era, a different one is surely near the top: data analysts or scientists – until 2023 among the most sought-after, premium-skilled staff – must be worried they are all about to be made redundant. Lots of other skilled, experienced, talented experts feel the same way. Where will the AI snowball stop? 

This is no groundless fear: a recent estimate suggested that up to 300 million jobs in the US and Europe may be affected by the technology (out of 316 million). Meanwhile, the PR-produced message from Sisense said:

General data experts and data scientists are no longer needed. […] Amir can discuss why engineers now hold the data power in the enterprise and how enterprises must adjust accordingly.

Another “must”, with the corporate centre of gravity – and therefore economic power and advantage – shifting ever closer to the world’s coders, engineers, and software makers. And further away from… everyone else. Isn’t that a problem?

An unknown impact

So, I spoke with CEO Orad about his vision of the future, and asked whether analysts and other experts have any place in this brave new world, given his company’s enthusiasm for GPT, and similar moves by his competitors. Our upbeat conversation soon became heated, which – if nothing else – reveals the human stakes in the GPT era. 

Orad said:

We're a late-stage start-up. When I say ‘start-up’, people debate with me because we're a $150 million business already. But I still think of it as a start-up. And all we do is help businesses put insights and analytics into products for customers. 

For example, Nasdaq is using Sisense to provide more innovative products, insights, and understanding of the data running through Nasdaq, so that CFOs of public companies can understand better who's trading stock. And GE is using us to help doctors understand better what's happening in medical devices. That's the big picture. 

And what we’ve seen in the past few years is, despite all the progress with analytics and data, there's still a big chasm that is preventing most humans from accessing them. In the CFO example, they want insights over stock inside the system from Nasdaq, and not in a separate one. And doctors want insights from the systems they use from GE or Philips, and to not need a separate tool.

So, the next thing was how to leverage AI to make it accessible to doctors, nurses, or finance people who are not experts in data and analytics.

OK, so what is the impetus behind the recent association with OpenAI and its GPT engine? Orad said: 

Most people think of ChatGPT as a good creative agent that writes poems, text, and songs and all that good stuff. But it is really an interface to the entire dataset that humanity has created over the last 2,000 years. 

So, we thought, ‘Hey, if that's the case, if it’s all of humanity's data – and I'll put an asterisk there about the quality of the data – and it’s available in chat format, then can we create a bridge to data analytics?’

There are so many examples of the creative things you can do with it. All your data, overnight, that business users can take advantage of. From ‘Hey, this customer’s name is Amir. What is his likely origin, and what language does he speak?’ To ‘This person is in that location, but is it a high-net-worth location, or low-net-worth one?’ To ‘Is this TV channel more liberal or more conservative, because we see different performance in our ads?’

This isn’t science fiction: you can do all of that in seconds – and for pennies or cents, versus months and thousands, or tens of thousands, of dollars. I think that's a game-changer. We [people] don't even appreciate it.
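To make that concrete, here is a minimal sketch, in Python, of the kind of data-enrichment prompt Orad is describing – asking a GPT model to infer attributes from an ordinary business record. It is not how Sisense Fusion is actually wired up (the article does not say); it assumes the official OpenAI Python client (openai>=1.0), an OPENAI_API_KEY in the environment, and a hypothetical customer record, prompt, and model name.

    # Illustrative sketch only: a generic GPT-based enrichment call of the kind
    # described above. The record, prompt, and model choice are assumptions,
    # not Sisense's implementation.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    record = {"customer_name": "Amir", "zip_code": "10023"}  # hypothetical CRM row

    prompt = (
        "Answer briefly for this customer record:\n"
        f"1. What language is someone named {record['customer_name']} likely to speak?\n"
        f"2. Is zip code {record['zip_code']} typically a high- or low-net-worth area?\n"
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; any chat-completions model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the enrichment deterministic
    )

    print(response.choices[0].message.content)

Each such call costs a fraction of a cent – the ‘pennies versus thousands of dollars’ economics Orad points to, and exactly the kind of automated inference the next few paragraphs question.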

But this is exactly why Italy has banned the technology until privacy guardrails can be put in place. What if the data subject has never consented to those questions being asked? What if they are pursuing the right to be forgotten under GDPR? Just because OpenAI, or any other provider, scraped the Web for pre-2021 data doesn’t mean that data was, legally speaking, in the public domain.

And what if the inferences about identity, net worth, or zip codes are wrong, or based on flawed assumptions? Or what if they are rooted in the kind of historic, human-biased behaviour that has long impacted minorities – denying them employment or access to services? (See our recent interview with IBM’s Calvin Lawrence for more.)

Orad also talked about being able to grab insights for pennies, rather than paying experts for them. But surely the further we go down that path, the more we are creating a situation where almost nobody's work has monetary value anymore – except that of coders and software engineers?

He responded:

There are three dimensions to this: the impact to humanity, the impact to experts specifically, and the dimension of power concentration and risk that comes with it. But let’s put aside the concept of AI going rogue, which is its own extreme kind of scenario…

 I didn’t mention that. I’m talking about creating an economic black hole, in which all power and money goes to software makers – companies that in some cases are already worth trillions of dollars. Meanwhile, everyone else’s skills become worthless. That’s unsustainable, morally and economically. 

Orad continued:

I believe for humanity, it will have a profoundly positive impact, because the child in the desert with an internet connection, and one cent of spend which can be covered by an ad, will get an expert opinion equivalent to a PhD in Manhattan. Or the farmer who is growing crops and battling a new disease or weather pattern: he can get advice based on humanity's entire collective knowledge. Or the kid in China who’s trying to learn math because she's very talented, but cannot get access to tutoring expertise in her village. 

So, the impact for humanity of having easy access to expert opinion, on the fly, at no cost? I think it's priceless. Knowledge is spreading, so people are more empowered.

A lost generation? 

I understand the importance of democratizing information and spreading wealth to developing economies and excluded people. The point about removing the barriers between information haves and have-nots is a good one. But the internet does that already, and it has made technology companies, Microsoft and OpenAI among them, into vastly wealthy gatekeepers of other people’s information, which those people now struggle to monetize themselves.

Meanwhile, the idea that all of human knowledge is on the Web is a fallacy. Huge amounts of data have vanished. Most of the data that is there has been added recently, and – in many cases – distorted to make it easier for search algorithms to find. Google is like looking at the vast landscape of human knowledge through a pinhole: almost no one goes beyond page one of any search. Plus, millions of books – in libraries and elsewhere – are not on the internet. But we have stopped reading them, or even looking for them.

So, that utopian vision is attractive, but it’s as much an illusion as it is a noble aim. Democratized information is a wonderful, transformative idea, but the Web is increasingly about lazy consumption – of noise and relentless advertising. The less effort people make to find good, trustworthy information, the less they find it. And AI allows them to make no effort at all.

Orad responded:

You talk about experts. And I do believe we'll see a very real impact within a decade on many experts – typically, white-collar jobs. And I think it will actually impact the middle of the spectrum. 

The real experts will be better off, because they're already sought after, and they’ll have AI to augment their skills, so their studying and research will be faster than ever. I believe they’ll be better off – to train the machines, and their opinion will be required more than junior people’s.

We have a gap in the world, in humanity: doctors, there's not enough of them. There are not enough nurses. There are computer experts, but not enough data engineers. But suddenly anyone with basic knowledge will become more effective with AI. 

The people that will be impacted, I believe, are actually the in-betweens, not the ‘expert experts’. And not the junior information workers, but those in the middle – who’ve been in an industry for 10 years, only to be replaced with a junior with a keyboard.

Now that is interesting. Is Orad essentially saying that it’s not today’s school- and college-leavers who will be hit, nor the people who have amassed knowledge and expertise over decades of work and study, but the people in the middle? And isn’t that mainly millennials and Generation Z?

If true, that’s an apocalyptic scenario for Western economies. Many of that generation are already saddled with debt and living in expensive rented accommodation, unable to get on the property ladder unless they have rich parents. Many are surviving on depressed wages amid high inflation, with the prospect of looking after their ageing parents too – whom the state, apparently, can no longer provide for.

And now AI companies are saying, ‘AI can do your job better – for nothing!’ And that claim is backed by corporations with trillion-dollar market caps? You don’t see any of that as a problem? Software makers are eating every other industry alive, including workers’ wages.

Orad said:

On the point about centralized power. If AI ends up being a single technology that only one company can possess, a single tower, then that is too much power for any one company to have. However, I'm sure you've seen that there are already copies of ChatGPT that are almost as powerful, which someone has created for $100 and run on a laptop. 

Apparently, creating it is really difficult and expensive, but duplicating it is not.

My take

Indeed. A bit like humanity creating two millennia of data, only for a software giant to scrape it and sell it. But that would be the ultimate irony, wouldn’t it? OpenAI undone by its own business model, duplicated on a laptop for free.

