source link: https://diginomica.com/weapon-mass-distraction-how-much-damage-might-ai-have-democracy-2024s-election-frenzy-looms

Weapon of mass distraction – how much damage might AI do to democracy as 2024's election frenzy looms?

By Chris Middleton

December 11, 2023



Whatever else 2024 holds, it will be election year in many countries – the first votes to take place at truly international scale since generative AI captured the world’s imagination. 

A partial list includes: the Presidential, House, and Gubernatorial elections in the US; General Elections throughout the Americas; General, Parliamentary, or Presidential Elections in 16 African nations, including Rwanda, South Africa, and Ghana; elections in 11 Asian countries, including General Elections in India, Pakistan, Taiwan, and Indonesia; and elections of every kind in 21 European nations – including a Presidential Election in Russia. Meanwhile, a General Election is due in the UK by January 2025 at the latest. 

So, what might AI’s effects on these elections be? The EU’s AI Act – agreed over the weekend – sees the technology as posing a “high risk” to democratic processes, a danger reiterated by industry organization techUK last week. Speaking at the techUK Digital Ethics Summit, Javahir Askari, its Policy Manager for Digital Regulation, said:

Alarm bells are already ringing about the potential impact of AI on democratic processes, on communities, and on society.

But what are those concerns? According to HP Dalen, an IBM Watsonx manager specializing in AI governance, we're talking about “a new threat” as “an amplifier” of disinformation and extremist views. Speaking at a panel on election safety in the generative AI age, he said:

If you don't control the output from platforms like that, it has the potential to impact and influence our elections. And that is dangerous for them across our democratic processes.

‘Control the output’? An intriguing statement, because – aside from the obvious need to stamp out disinformation and minimize hallucinations – what if the problem is that AIs sometimes present answers that conflict with some individuals’ beliefs? 

An amusing example of this occurred recently on the platform formerly known as Twitter. Supporters of X supremo Elon Musk – who famously battles what he sees as “the woke mind virus” – were dismayed to find that asking X’s own chatbot, Grok, if trans women are real women produced the answer ‘Yes’. Cue the likes of Musk acolyte Ian Miles Cheong urging supporters to “keep correcting it” until it says ‘No’. 

Arguably, therefore, the real battle in the years ahead may not be for voters’ hearts and minds, exactly, but for control over the outputs of generative AI systems. In other words, to populate them with as much data as possible that supports a preferred viewpoint.

Automated confirmation bias? Or an attempt to remove it? That depends on your beliefs. Either way, it gives the lie to any claim that generative AIs can be classed as intelligences that are free from humans’ own cognitive biases. 

Disinformation

Yet whatever your views on a variety of topics, no one denies that generative tools, Large Language Models, and cloud-based chatbots could be used as weapons of mass distraction or disinformation, enabling well-funded and organized groups – of any political hue – to push voters towards or away from different views. 

And not just within nations or among competing political parties, but also by hostile states. ‘Bad actors’ (in Western terms) already see AI’s enormous potential to destabilize enemies and shore up allies – an opportunity even bigger than the one presented by social platforms. 

With the addition of deep fakes – video, audio, images, and more – we are fast entering an era where we can no longer trust the evidence of our own eyes and ears.

A direction of travel that is aided by what many see as falling standards of honesty and ethics in some parliaments, as populist politicians focus more on the short-term expediency, eyeballs, and engagement offered by social platforms than they do on maintaining public trust in democratic processes.

Last month, the UK’s GCHQ warned that AI-enabled deep fakes pose a particular threat to public trust in the forthcoming UK election. The annual report from the National Cyber Security Centre – part of GCHQ – warned that AI will:

Almost certainly be used to generate fabricated content. AI-created hyper-realistic bots will make the spread of disinformation easier, and the manipulation of media for use in deepfake campaigns will likely become more advanced.

The point about spreading disinformation is important in itself: opinions that, on the surface, appear to be shared by large numbers of people may in reality be amplified by armies of convincing bots and fake accounts. Meanwhile, any content shared by those accounts at scale gains a momentum of plausibility.

In turn, posts liked by bots and trolls may persuade extremists that their views are more popular than they are, encouraging them to adopt ever more extreme positions. This creates even more problems: populist politicians who are well aware of the fakes, but who see the advantage in maintaining the illusion.

The Times recently reported the existence of deep-fake videos of Sir Keir Starmer, leader of the UK's Labour Party, and London Mayor Sadiq Khan espousing extreme or offensive views. It also noted that it is easy for bad actors to use AI tools to create fake videos of, say, statues being toppled or large boats of migrants heading for the UK.

So, might some political parties see such tactics as fair game, as acceptable behaviour in the cut and thrust of our divisive, combative politics? After all, we have always been at the mercy of media barons’ political affiliations; isn’t AI merely an intensification of that process? Just another way to manipulate public sentiment and make people feel angry and afraid, or intolerant and proud? 

Henry Parker is Head of Government Affairs and Policy at vendor Logically.ai. Speaking at the techUK Summit, he said that the key problem facing trustworthy elections in the AI age is not so much individual disinformation campaigns – which might be quickly debunked, or revealed by watermarking AI’s outputs – but their cumulative effects on society:

Certainly, there is scope for a large breakdown of trust in the overall information environments in which this election takes place, as a result of the mass of disinformation campaigns.

This would be an achievement in itself for hostile states: undermining public trust in popular platforms. Parker continued:

But arguably, this is not a new problem: mis- and disinformation happens today. It's happened for thousands of years. But the broad effect [now] has been the overall pollution of the information environment.

He warned though that, above all, AI risks speeding up the spread of disinformation:

The second primary impact we might see as a result of this new technology is the – I don’t like the term – democratization of disinformation. To illustrate what I mean, take something like the 2016 US Presidential election, where there was a direct attempt to manipulate it through a disinformation campaign from Russia. It cost $12 million and took a building of 400 staff to deliver. 

But what we are now seeing, with the mass availability of these tools, is the ability for a campaign like that to be done much more quickly, much more cheaply, and much more efficiently than previously. And there are proofs of concept out there. You can now use generative AI to not just produce fake content, but also to use it end to end. So, to produce the content, attach [fake] audio and video, post it on social media, and refer users back to a website that is entirely AI generated.

According to Parker, such campaigns could be run today for under $1,000 by individuals or small groups – as opposed to, say, the reported $12 million and 400 people in the Russia-backed campaign of seven years ago. The risks of AI enhancing hostile attacks or aggressive party-political campaigns are not hypothetical, he suggested:

We worked directly with the Slovakian government on their election recently [a General Election was held in September 2023, leading to a Smer-SD-led coalition]. We tracked what happened with the deepfakes there and advised them how to manage that. 

I will say that was the first election we have seen where there was a demonstrable attempt to circulate fake [AI-generated] content. Whether it was coordinated or not, it was certainly circulated. And it did play a degree of a role in, at least, what the discourse was about in that election.

Influencing

So, while AI-enabled disinformation might not win elections at this stage, it is already helping nudge voters towards, or away from, whichever debates politicians believe will influence them. In much the same way that countless ‘think tanks’ have sprung up in recent years, using eminent-sounding names as cover for direct political action.

Areeq Chowdhury is Head of Policy, Data & Digital Technologies, for the Royal Society, the UK’s national academy of sciences. Last year, the organization published a report on AI-enabled disinformation and deep fakes. Chowdhury explained:

The TLDR on that is the vast majority of the public could not detect a high-quality deep fake, with or without a content warning.

So, are flags about data provenance actually a solution to understanding content’s origins or revealing how it was manipulated? It’s not as simple as that, he explained. Even cropping an image, applying a filter, or editing a single pixel could lead to an image being flagged as unreliable. 

In short, trust in images that are genuine – in the sense of them documenting real events – can easily be undermined too. Equally, important details and context can be cropped out of real images to make them tell a different, misleading, or partial story: something that has been possible since the dawn of photography, of course.
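
One reason those flags are so trigger-happy is that cryptographic bindings are all-or-nothing: change a single byte of a file and it no longer matches whatever was originally signed or registered. The snippet below is a minimal, hypothetical Python illustration of that property – not how any particular provenance standard actually works – showing that a one-character edit produces a completely different fingerprint:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 digest of the raw bytes of a piece of content
    return hashlib.sha256(data).hexdigest()

original = b"...pixel data of the image as first published..."
edited   = b"...pixel data of the image as first published..!"  # one byte changed

print(fingerprint(original))  # some 64-character hex digest
print(fingerprint(edited))    # a completely different digest

# A naive check that simply compares a file's digest with the registered
# original will therefore flag a harmless crop, filter, or single-pixel
# edit as 'unverified', exactly as described above.
```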

Throw in AI, and you open a Pandora’s Box of challenges that might push voters away from digital platforms entirely. In other words, as our channels become so full of sewage, in effect – deliberately or cumulatively – we may stop swimming in them to protect our mental health, relationships, and democratic processes. 

But the problem there, of course, is that government itself is becoming more digital, with politicians seeing technology as offering opportunities for more transparency, engagement, and accountability, not less. 

Chowdhury admitted to being pessimistic about the future, because researchers’ warnings about these problems have largely been ignored by governments – administrations that, it could be argued, are sometimes overly keen to court Big Tech investment and patronage. He said:

I have a pretty bleak view, actually, on next year's elections and the sorts of challenges we’ll face around disinformation. Since 2016, which is the last time that people cared about it – because of Brexit and Trump, in many ways – we've actually gone backwards in terms of approaches to tackling disinformation. 

First of all, the challenges have got harder. We've now got ChatGPT, so it's much easier to generate some of this content. And we’ve got more sophisticated deep fakes and cheap fakes.

With audio deep fakes emerging too – in May, US news reports broke of scammers using AI clones of loved ones’ voices to con families out of money – any research on these tools’ effectiveness, or their impact on next year’s elections, won’t emerge until after the fact. By which time, the technology will have moved on and become even more sophisticated. Chowdhury noted:

We don’t know exactly how much impact this stuff will have, but we do know it's inherently a bad thing for democracy. We have good researchers looking into it, who previously had access to data. But much of that access has been removed by a lot of companies for various reasons. 

On top of that, we’re seeing legal action against researchers who revealed harms on social media platforms. So now, if you research disinformation on a major platform, you risk your livelihood, or maybe you go bankrupt or face criminal prosecution.

Chowdhury did not specify which cases he was referring to, but added:

It’s a very bleak position. […] But at least we are much more aware, I think, as a society.

Downbeat

At this point, you could normally rely on a vendor to inject some optimism. But even Stefanie Valdes-Scott, EMEA Director for Government Policy and Relations at Adobe – a company that is doing good work in AI trust and licensed content – seemed downbeat about the immediate prospects:

Once people know that deep fakes are out there, they tend not to trust anything anymore. And that also has consequences for democratic discourse, and people [not] believing political communications and the media. That's why Adobe was one of the founding members of the Content Authenticity Initiative, which promotes an open standard for content provenance mechanisms.

A positive initiative. But as we have seen, even authenticated content can be misused. So, did any of the panel offer solutions to these problems, or just a collective shoulder-shrug at the looming damage to citizen trust in democracy? IBM’s HP Dalen said:

We promote openness, and we are obviously a business-to-business supplier. When we do our foundation models, they come with full openness around the algorithms. So, openness is clearly part of [the solution]. And in order for digital watermarks to work, for instance, you need to have that openness.

He added:

If I was a politician, I would get my own blockchain. Seriously, I would.

(A good idea, perhaps – assuming that the politician in question has any intention of being factual and transparent, of course...)
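
The underlying idea is straightforward enough to sketch. In a hash chain, each published statement is bound to the fingerprint of the one before it, so any later tampering with the record breaks every subsequent link. The following is a toy, hypothetical Python illustration of that principle – not a suggestion of how IBM or anyone else would build it:

```python
import hashlib
import json
import time

def add_entry(chain: list, statement: str) -> None:
    # Bind each new statement to the hash of the previous entry
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"statement": statement, "time": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    # Recompute every hash; an edited or reordered entry breaks the chain
    prev_hash = "0" * 64
    for record in chain:
        body = {k: record[k] for k in ("statement", "time", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

ledger = []
add_entry(ledger, "Our manifesto pledges X, Y and Z.")
add_entry(ledger, "Here is the full, costed plan behind those pledges.")
print(verify(ledger))                        # True
ledger[0]["statement"] = "We never pledged X."
print(verify(ledger))                        # False: the record has been altered
```

A real deployment would also need the entries to be independently witnessed, of course; otherwise the politician could simply rebuild the whole chain after the fact.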

Logically.ai’s Parker said:

Digital watermarking and content provenance are certainly part of the solution, but I want to emphasize, only part of it. We look at this problem in a different way: not necessarily as being a content provenance issue, but a dissemination one. It's really difficult to see how you could possibly authenticate every single piece of content – it's like water in the ocean. 

So, what we are trying to do ourselves is use AI as a tool to look at online behaviours around the circulation of disinformation. The tactics, techniques, and procedures behind people who want to run these types of campaigns. It's not completely fool-proof, and we have to employ a qualified team of human analysts. But we try to think about using AI itself as a safe tool in this space, because it can be. It’s not just the problem. It can also be the solution.
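
For illustration only – this is a toy heuristic, not a description of Logically.ai's system – the behavioural angle boils down to asking who shares what, and how quickly. Here is a Python sketch of one such crude signal, flagging identical messages pushed by many distinct accounts within a few minutes of each other:

```python
from collections import defaultdict

def flag_coordinated(posts, min_accounts=20, window_seconds=300):
    # posts: list of {"account": str, "text": str, "timestamp": float}
    # Group posts by identical text, then flag any message that a large
    # number of distinct accounts pushed out within a short time window,
    # a crude signal of coordinated amplification rather than organic sharing.
    by_text = defaultdict(list)
    for post in posts:
        by_text[post["text"]].append(post)

    flagged = []
    for text, group in by_text.items():
        accounts = {p["account"] for p in group}
        times = sorted(p["timestamp"] for p in group)
        if len(accounts) >= min_accounts and times[-1] - times[0] <= window_seconds:
            flagged.append({"text": text, "accounts": len(accounts)})
    return flagged
```

Real campaigns are rarely that clumsy, which is why Parker stresses the need for qualified human analysts alongside the tooling.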

That's a message we hear increasingly often – that the answer to AI’s problems is more AI. But he added:

Nobody has yet worked out a way of authenticating whether a piece of text is AI-generated or not. In fact, OpenAI shut down a project to do that because even they don't know how to do it!

Adobe’s Valdes-Scott offered a different perspective – perhaps the only workable one for the foreseeable future:

Trying to restore trust in our digital world by establishing a global gold standard, one that leverages the power of provenance to help creators prove what's true [is part of the solution]. But we're changing the way to look at it.

By saying, ‘Let's give the good actors out there the technical solution to authenticate their work’, rather than try to capture all the bad guys and try to detect all the deep fakes. Because the bad actors don’t care, right? People who have the intention to deceive – who have the intention of creating and, especially, spreading disinformation – will continue to do so.
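
In practice, giving good actors a way to authenticate their work means cryptographic signing: the publisher signs the content with a private key, and anyone holding the matching public key can confirm the bytes are untouched. A rough, illustrative sketch using the Python cryptography package – a stand-in for, not a reproduction of, the Content Authenticity Initiative's actual open standard:

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a keypair once and publishes the public key
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

report = b"Candidate A said the following at the 12 March rally: ..."
signature = private_key.sign(report)

def check(content: bytes) -> str:
    # Verification fails for any content the publisher did not sign as-is
    try:
        public_key.verify(signature, content)
        return "verified: released by this publisher, unaltered"
    except InvalidSignature:
        return "rejected: altered, or not from this publisher"

print(check(report))                              # verified
print(check(report + b" [fabricated addition]"))  # rejected
```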

My take

Indeed. And as some politicians’ behaviour on platforms like X reveals, from time to time it is they who are the bad actors. Not generating the fake content themselves, perhaps, but seeing the short-term advantage of spreading it to engage – or enrage – their voter bases. 

So, perhaps the only workable solution is better, more honest politicians.

