Enterprises have little idea what they are buying with AI – but that's not going to stop them!

By Chris Middleton

August 17, 2023

“I’m worried about AI, but simply must have it!” is the mantra of the day, as survey after survey reveals organizations rushing to install the technology at the heart of the enterprise while expressing their fear of doing so. 

This speaks of tactical, me-too adoption rather than considered strategic moves by business leaders; troubling, given the nature of the fears expressed by many organizations, analysts, academics, and even tech titans. See our recent report on AI in education.

The latest research comes from Financial Services: a sector for which AI may be either a boon or an existential threat to professional jobs (especially junior ones).

According to the latest EY CEO Outlook Pulse survey, from the professional services giant formerly known as Ernst & Young, nearly two-thirds of finance bosses believe that not enough is being done to prepare the sector for the unintended consequences of AI. These include bad actors, misinformation, deep fakes, privacy risks, unethical use, and more – a catalogue of unaddressed problems.

Despite this, nearly all of the Financial Services chiefs surveyed – 90 out of 96 CEOs – are actively investing in the technology, with over half already using it (surely raising the question of why so little is being done to address the risks: isn’t that a leader’s responsibility?).

So, why the rush? Because limiting AI research – and its implementation – will only “hold the sector back”. That’s according to Dr Yi Ding, Assistant Professor of Information Systems at the Gillmore Centre for Financial Technology, an institution within the Warwick Business School dedicated to AI and machine learning research. 

But hold them back from what, exactly? Ding, who is also a Teaching Fellow in Banking and Finance at the University of Southampton, says:

As with any new technology, organizations must be aware of the risks it poses, but artificial intelligence has already proven itself a valuable tool for tasks such as data analysis to support the detection of financial criminals, chatbots to support customer service in online banking, and forecasting to support strategic decision making for business growth. Missing out on these benefits is not an option for such a fast-moving sector.

But Ding adds:

The Financial Services sector should embrace the R&D programmes being undertaken by academic institutions for the trustworthy development of AI and prioritise a safety-first approach during its adoption. Once staff are given proper training, and organizations mitigate the risks, they will be able to reap the significant benefits that AI has to offer the industry.

But shouldn’t that training and mitigation come first, not after the fact? The EY report suggests that, for most organizations, it simply is not happening.

Another survey comes from London-based strategic skills and professional coaching provider Corndel. It reveals that younger, Generation-Z workers (18- to 34-year-olds) are particularly worried about AI, believing it will take at least 50% of their job within the next 10 years.

Ouch. The wider survey of 300 data leaders and 1,500 data-focused employees found that 61% of all respondents believe the technology will take at least 25% of their job this year(!), with nearly 40% of younger workers fearing the above-mentioned employment bloodbath within a decade.

The skills challenges of this – Corndel’s focus – should be obvious. But the survey paints a bleak picture of the reality, echoing the fears expressed in EY’s report. Ninety-two percent of data employees report a significant skills gap, with 82% citing “no training in AI use” whatsoever.

So, just to recap what we’ve learned from these surveys so far: a majority of organizations fear AI, yet most are rushing to adopt it without addressing the risks. But they lack even basic skills in its use, and offer no training. Meanwhile, younger employees ‘sit on their hands on a bus of survivors’, to quote David Bowie, while their jobs are dismantled by the technology.

Logic suggests that the wider political implications of this demand urgent attention. 

Consider this. Today’s debt-saddled young workers, most of whom will struggle to get onto the bottom rung of the property ladder (which their parents and grandparents sit atop), have already had their education, personal development, and progress stunted by the pandemic, by the long tail of austerity, and by the recent political decisions of their elders (whom they will spend long years caring for), Brexit being one obvious example.

Those young people now find themselves on a burning platform of jobs vanishing to AI. Meanwhile, inflation soars, many families are forced to choose between ‘heat or eat’, and landlords ramp up city rental prices. And just to add insult to generational injury, bosses are spraying AI propellant everywhere, while saying how worried they are about it. 

Does that sound like an optimistic, sustainable future? One wonders how much more Generation Z can be asked to bear on our overheating planet. The first generation, perhaps, to be worse off than its parents, with a future that looks increasingly bleak. For God’s sake, leaders, stop and think about this. Isn’t that what you are paid for?

But it is not just younger workers who are being left behind by leaders’ tactical, me-too buying policy on AI. Ninety-six percent of Generation-X and Boomer employees – those over 55 years old – report a complete absence of AI training and skills in their organization! 

Corndel’s ‘Better Decisions, Realised’ report also reveals that 44% of employees believe the lack of time allocated to learning and skills development is a major challenge in their data-focused roles – a figure that rises to 55% in larger organizations. 

Risks

Finally, a new report from research provider Prolific, the University of Michigan, and London-based digital studio Potato, offers a different but related view of AI risks. It reveals how people’s different socioeconomic and ethnic backgrounds influence their decisions about what is, or is not, harmful or offensive content. 

The dangers of AI bias, especially historic bias against minorities and women in training data, have been well documented in recent years – see our piece on Calvin Lawrence’s book Hidden in White Sight, for example.

However, the Prolific report offers an important new perspective: that people’s differing viewpoints are key when it comes to labelling the data used to train and build AI models. 

A fair point. As our industry is overwhelmingly male and white – and we are surrounded by rich, straight white males ranting against ‘wokeness’ – the implication is that coders and algorithm designers may be allowing data through that is offensive to women and minorities. Remember: technology should work for everyone, not just for its white male designers.

Findings from the report include that black participants tended to rate training content as significantly more offensive than other racial groups did, while women judged the content to be less polite than men did. In short, annotators’ backgrounds and genders have a statistically significant effect on how training data is labelled, and must be considered in that process. The report says:

Findings like the above of course have huge ramifications for the development of AI, as what one person finds (or interprets as) offensive or impolite, another may find perfectly acceptable; with the danger being that existing biases are baked into AI systems potentially causing AI-inflicted bias and discrimination.

Prolific co-founder and CEO Phelim Bradley – a physics and computational biology DPhil (PhD) from Oxford University – adds:

This research is very clear: who annotates your data matters. Anyone who is building, and training, AI systems must make sure that the people they use are nationally representative across age, gender, and race. Or bias will simply breed more bias.
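
To make Bradley’s point concrete, here is a minimal, hypothetical sketch of the kind of check he describes: grouping annotations by annotator demographic and testing whether ratings differ significantly across groups. The file name and column names (annotations.csv, annotator_group, offensiveness) are illustrative assumptions, not details from the Prolific report.

```python
# Hypothetical sketch: does annotator demographic shift offensiveness labels?
# Assumes a CSV of annotations with two columns (both assumed, not from the
# report): annotator_group (self-reported demographic) and offensiveness
# (a numeric rating of a piece of training content).
import pandas as pd
from scipy import stats

annotations = pd.read_csv("annotations.csv")  # hypothetical data file

# Mean offensiveness rating per demographic group.
print(annotations.groupby("annotator_group")["offensiveness"].mean())

# One-way ANOVA: do mean ratings differ significantly across groups?
samples = [
    grp["offensiveness"].to_numpy()
    for _, grp in annotations.groupby("annotator_group")
]
f_stat, p_value = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value (say, below 0.05) would suggest that who annotates the
# data shifts the labels -- the effect the Prolific report describes.
```

If a test like this flags significant differences, the practical fix Bradley suggests lies upstream: recruit an annotator pool that is representative across age, gender, and race, rather than trying to reweight the labels after the fact.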

My take

As ever, caveat emptor. Or perhaps a more accurate aphorism for today should be, ‘Think first, then (perhaps) buy later. And take your time.’

