Source: https://diginomica.com/why-struggle-ethics-generative-ai-more-complex-we-imagine

Why the struggle for ethics in generative AI is more complex than we imagine

By Chris Middleton

April 24, 2023




(Image: right vs wrong, via Pixabay)

Some generative AI companies have been criticized for scraping the Web for training data, including copyrighted material, without the consent of creators and rights owners. In this way, the work of any number of creative, expert humans may lie behind your generated content – but unnamed, uncredited, and unrewarded financially. 

(These issues have been examined in my three recent reports on IP in crisis, which looked at metaverses, the visual arts, and music.)

Meanwhile, the ‘AI’ – in many of these cases really a derivative work generator – simulates intelligence by recycling skilled humans’ output. Some might see that as little more than a confidence trick, despite other AIs’ unmatched, transformative abilities to spot patterns in data. The latter may lead to real advances in science, medicine, healthcare, new materials, sustainability, climate tech, and more. Let’s hope so.

A shame, then, that a handful of more cynical companies are leading the way in public acceptance of the technology, by simply scraping the Web and waiting for lawsuits to roll in. Not just in the worlds of writing and music, but also in other creative fields, such as photography and video – where many of us have unwittingly added to training data by sharing our content online.

However, that generational misstep presents an opportunity for rival vendors to do better, to stress their ethical credentials and business models, to use AI to increase the sum of human happiness – and hopefully, solve more serious problems than the need to pay creative people for their talent.

So, can they achieve such noble aims? The jury is still out.

One challenge is that ethical behaviour is sometimes in the eye of the beholder. As we explored in another recent report, Netherlands-based start-up LaLaLand.ai’s sincere claim to be showcasing diversity with its photo-realistic, AI-generated fashion models (sourced and trained via genuine photos online) is surely counterbalanced by the job opportunities being denied to real plus-sized, black, and Asian models as a result.

When LaLaLand’s clients include the likes of Levi Strauss, Calvin Klein, and Tommy Hilfiger – wealthy, global fashion brands that can afford to pay real models – who wins from the use of virtual ones? First, the AI company by taking those people’s income potential. And second, the megabrands by saving money. 

But what about ethnic-minority and plus-size citizens: the very people LaLaLand believes it is helping by showcasing human diversity in this way? Granted, they see faces like their own staring back from websites and billboards – a good thing; but they are not real. Meanwhile, the door of opportunity to work in the industry themselves, for those brands, has been slammed shut. A simulation of diversity, therefore; and arguably of ethics too. 

Look at it another way: the overwhelming majority of coders are white males (in STEM careers generally, 85-87% of employees are male and 91% are white, according to multiple surveys). Those workers are now receiving money that would otherwise go to black and Asian models – mainly women. Is that ethical AI?

Other players in the visual arts seem to be making a more careful and considered effort to deploy AI ethically. For example, NVIDIA’s recent responsible AI partnership with Getty Images is partly a reaction to what Getty sees as Stability AI’s (Stable Diffusion’s) mass copyright infringement in scraping the Web. Plus, an attempt to infuse existing image-search services with AI, thus ensuring rights holders get paid.

(Any long-term experimenters with Stable Diffusion – and I am one of them – will be familiar with the appearance of what, at one time, appeared to be watermarks in some generated images. A giveaway that image-library previews of licensed content were likely among the data scraped.)
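As an illustration of how such artefacts surface, the sketch below generates an image from a stock-photo-style prompt – the kind that, anecdotally, most often produces ghostly watermark-like smudges. It assumes the open-source Hugging Face diffusers library and a public Stable Diffusion v1.5 checkpoint; these are illustrative choices of mine, not tools the article specifies.

    # A minimal sketch, assuming the Hugging Face diffusers library and a
    # public Stable Diffusion v1.5 checkpoint (illustrative choices only).
    # Install first: pip install torch diffusers transformers accelerate
    import torch
    from diffusers import StableDiffusionPipeline

    # Load the pretrained text-to-image pipeline onto a GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # Stock-photo-style prompts are the ones that, anecdotally, most often
    # reproduce faint watermark-like artefacts, hinting that watermarked
    # image-library previews were among the scraped training data.
    image = pipe("professional stock photo of a business meeting").images[0]
    image.save("sample.png")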

Also partnering with NVIDIA is start-up Bria, which offers a generative AI based on a library of licensed imagery. Again, the aim is to ensure that copyright holders are remunerated. The challenge in this case becomes a technical one: how to output the best, most competitive imagery from a system trained on a much smaller data set than the Web.

Meanwhile, seeing generative AI as a threat to its established imaging business, software giant Adobe is gradually infusing AI throughout the Creative Cloud, with a view to using it responsibly on licensed content. 

On the face of it, all respectable efforts.

But what about AI in the moving image (where Adobe has a presence too, of course)? 

This is a creative hotspot that rock star Peter Gabriel – always an early adopter of new technology – sought to encourage this month. He ran a Stability AI video competition while publicizing songs from his forthcoming album ‘i/o’ (input/output).

Gabriel certainly got more out of the idea than he intended. A storm of criticism from fans, concerned that he might be supporting an unethical business model, forced him to publish a statement clarifying his position on 21 April.

In it, he said he was “disturbed by the negative reactions” to the competition, explaining that it stemmed from a simple desire to be playful and creative with new AI tools. 

Restating his passionate commitment to artists’ rights, and to human rights in general (few would deny his track record in both regards), he added:

AI is a product of our species, and we need to find ways to build the ethics, compassion and wisdom that we value, directly into the algorithms, to protect and defend what is important to us.

I have added my name to a letter written by Max Tegmark, Steve Wozniak, and Elon Musk amongst others, to pause on the release of new AI for six months while we try and figure out what we should be doing. But if we don’t use this time to play with and learn from what we have already created, how can we hope to understand it?

A fair point. 

‘New people, people that don’t exist’

Also in this space, Victor Riparbelli is co-founder and CEO of UK-based AI start-up Synthesia, which generates videos from text.

The aim is to bring boring documents to life by turning them into more engaging short movies – a service he claims is already used by 35% of the Fortune 1,000. If true, that is ample evidence of a tactical rush among leading enterprises to adopt generative AI.

Speaking at a recent Westminster Legal Policy Forum on the risks to intellectual property from AI, the Metaverse, and tokenisation, he explained:

We live in a world where people want to watch that content, or they want to listen to it; they don't want to read anymore.

Perhaps not the win for humanity he thought it was when he said it. But in fairness to Riparbelli, he was mainly referring to technical or onboarding manuals, rather than to books and other texts in general. 

Synthesia has raised an impressive $67 million in venture capital in just six years. And from day one, the CEO has seen it as a completely ethical company. He said: 

Everything we do today is 100% fully consensual. Every day, we're only working with actors who have given full consent, and who get paid every time someone makes a video [in which they appear].

On the question of data mining and data analysis […], the way we've thought about this question from the beginning has always been that we want to have 100% consensual datasets, and that's how we've built the business.

[By contrast] we know that a lot of the new text-to-speech systems have just downloaded 500,000 hours of audio off the internet to train their [products].

On the face of it, another respectable, ethical business. One that – in contrast to the internet-scrapers – honours the need for both on-camera human performers and voice actors to earn a living. Indeed, it creates new opportunities and markets for them.

But then Riparbelli said:

We're probably within the next 12 to 18 months going to be able to generate new people, people that don't exist. That is going to be an interesting part of the product for sure. 

It is also going to open up interesting questions, such as: can you own the likeness of a virtual person if we generate that person? To own the IP of that? What happens if one of our competitors tweets an exact replica of this person who doesn't exist and thereby uses their likeness?

What indeed? But those statements suggest a direction of travel that – arguably – may not be quite as people-friendly as it first appears.

If 100% of the data it gathers is consensual rather than scraped off the internet, and if the company is paying real actors every time they appear in a video, then it would seem that those professionals may also be supplying the source training data for a coming generation of ‘synthespians’ – virtual performers who may be owned, outright, by the company.

That is supposition, of course. But perhaps those AI-generated performers – composites of real people, in effect – will be cheaper to employ than the human actors who trained them (like LaLaLand’s models). And who knows how many enterprises will employ those synthespians in future, instead of real people? 

If so, the result – once again – may be that the potential income of struggling, diverse, creative humans is taken away by heavily funded AI companies. All so that Fortune 1,000 customers can save money (as they have a right to do).

These are careers that, like those of most musicians, photographers, painters, illustrators, designers, writers, filmmakers, and more, already demand risk, commitment, dedication, skill, and talent – a lifetime of lived human experience – with highly uncertain prospects of success, or of a viable, sustainable income.

Now, thanks to tools that purport to be AI – but are really derivative work generators trained on creative humans’ output – companies just have to press a button.

Remember when tech CEOs told us that Industry 4.0 innovations would free us from boring tasks so we could focus on being creative? Well, take a bow, human. Your AI understudy is waiting in the wings of your creative career. And it knows all your parts. 

Indeed, a part of it is you.

My take

Welcome to the hall of mirrors that, increasingly, hosts the ethical debate on generative AI. 

Is that your own face staring back at you from the screen? And did you knowingly consent to it being there? OK. Now look again: is that an AI output that looks a bit like you – and which has taken your career? 

