6

The downside of going gangbusters on AI - new forms of security risk

 6 months ago
source link: https://diginomica.com/downside-going-gangbusters-ai-new-forms-security-risk
Go to the source link to view the article. You can view the picture content, updated content and better typesetting reading experience. If the link is broken, please click the button below to view the snapshot at that time.
neoserver,ios ssh client

The downside of going gangbusters on AI - new forms of security risk

By Martin Banks

February 20, 2024

Dyslexia mode

Hand using laptop with cyber security padlock symbol projecting in front of screen © Rawpixel.com - Shutterstock

It is common for tech conferences to provide delegates with some flashes of insight that might strike useful chords with them from the great, the good and the knowledgeable. They rarely fail to be interesting, but sometimes a little gem turns up.

This was the case at the recent Dynatrace Perform event at the second-day keynote 'fireside chat' between Sandi Larsen, the company’s VP of Global Security Solutions, and Maria Markstedter, CEO and founder of Azeria Labs.

Markstedter has built up an extensive track record in working on and around ARM processor security as well as reverse engineering technologies in that area. Given that ARM technology plays a significant part in the design and operations of GPUs which have become the beating heart of most AI implementations, it does mean she has become a leading AI security expert as well. 

This is an area worthy of serious consideration by businesses for it comes with some new and different complexities to the general view of security that open new attack vectors and new types of attack. From Markstedter’s high level overview of the subject at the conference it is already possible to see AI security soon becoming a new professional specialism which at least one member of each security team in any business will need to acquire.

One of the key factors here is the fact that the majority of AI users are likely to end up with several different types of AI, with high likelihood that composite AI systems will end up collaborating on the increasingly complex business processes that will become possible. But that collaboration will open new doors to attackers and the fact that the AI systems will be working together, with output from one becoming input for another, the possibility is high that new ways of injecting malicious data and code can be found, and do it in ways that the AI systems can be used to mask what is happening.

Teaching and training end-user staff in these types of skill is Azeria Labs’ main occupation, and with her background in ARM technologies and security, Markstedter realized that there weren't enough resources available for security researchers to learn from. In her view this is already a huge problem because of the penetration ARM has - not only GPUs but its’ even longer-standing presence as the processor technology underlying most IoT and mobile devices. For security researchers, this is something of a black hole that really needs to be filled in.

A need to know

That need also has to be set against what she called the flood of use cases now being developed and introduced around generative AI, with so many users already jumping on the bandwagon. Meanwhile, the potential for new security threats has barely even been considered, let alone suitable tools and defences developed and made widely available, while in-depth knowledge of the subject is very thin on the ground.

This was the subject of Larsen’s first question to Markstedter - the change in skill sets that is going to be required of security teams, if they are to analyze the security of AI systems, detect attacks and prevent them:

That's the real challenge, because the ability to identify abuse and misuse of these AI systems within your respective product or platform will be the responsibility of your security team, not your data analysts. We're talking about a completely different system here, so we're talking about new attacks, new types of vulnerabilities. And all of these new attacks and vulnerabilities require you to have an understanding of data science, and how AI systems work but at the same time, a very deep understanding of security, threat modelling and risk management. Because you can't find vulnerabilities in a system that you don't fully understand. 

The biggest problem is that we don't have enough people with the skill set required to analyse the systems for security vulnerabilities or to secure the integrations, the skill set is split between the data scientists and the security professionals.

Markstedter spoke about the speed of evolution that such staff will have to deal with. This has been significant in the year that generative AI has become widely available and that she sees this pace increasing as the user community takes to the change from uni-modal to multi-modal AI. This brings the possibility of simultaneous analysis of different data types from different data streams.  The performance and productivity boosts possible are huge: but so are the complexities and the opportunities to exploit points of entry for malware. 

She observed that in the year it has been available, ChatGPT has become limited by the fact that it can only analyse one type of data at a time. So it can analyze the text on a website, but that only gives a partial understanding of it because it cannot analyse, concurrently and contextually,  any of the images that complement the text on the same website:

Multimodal AI, on the other hand, specializes on in multiple modalities of data. it's in combining text, audio and video to get a richer and fuller understanding of the information. And this is actually much closer to how we humans perceive the world because we don't just rely on one sense, we usually combine sights and sounds in order to understand our environment.

She took this line of thought further, imagining a couple of possible use case scenarios. How about an AI app with multiple capabilities that sits in on a Zoom meeting, but not only transcribes what is being said, but also analyzes the tone of voice for sentiment and the visual aspects of the speaker for body language? It could then process all of these types of different data simultaneously to create a comprehensive report on the trustworthiness of that third party. At the other extreme, it could even just answer incoming user questions live via text or voice. 

Tools and tactics

The flipside of this good stuff, of course, comes with a downside – the security implications, with a whole new range of areas for concern opening up, many of which are geared to the growing potential of multimodal operations. This, Markstedster pointed out, will require agents to not only analyze multiple modalities, like multiple types of data and apply multimodal reasoning, but it will also need access to a multitude of business data. This means access and identity management has to be reevaluated because this is a world where non-deterministic systems have access to a multitude of business data and applications - and has the authorization to perform non-deterministic actions.

Larsen asked whether this creates new attack surfaces that arise out of these new systems and what changes should organisations be prepared to make?

This, Markstedter suggested, requires a significant re-think of the fundamentals of data security, because model data is just data at the end of the day, and this needs to be protected just as much as business-sensitive data. Attacking a model through its data inputs is an important way of exploiting access and authorisation. A model can be attacked through the data it processes internally or the data it fetches externally, she explained 

We also need to keep in mind that model data has periodically been updated from external sources, and these external sources can be an open source data set. So it has to be trusted. And another problem with these data stores is that they are actually designed to handle real time data streams, which basically means that they can update their contents and understanding whenever new data becomes available. 

Say you use an AI agent to process your emails and you organise your schedule around it. It's very easy for an attacker to manipulate an image and have the AI agent misinterpret an image that seems harmless to our eyes, but actually contains malicious instructions.

In terms of new tooling and security analytics this means, above all, businesses thinking hard and fast about what they may need to do because innovation in AI is moving so fast right now and many users risk being left behind. New tooling, and a re-think of data analytics, are necessary because gen AI can also be combined with other AI systems to form composite AI systems. These, Markstedter said, can be much smarter and more capable than any AI technology working alone. That is the danger area, but can also be used as the basis of any defence.

The major takeaways she suggested users should consider included being fast yet careful about new tooling:

Take advantage of the new tools that are being created, but also be very careful about integrating them way too soon. Make sure you actually threat model this whole thing. And more importantly, you need to educate your security team. Because we need to understand the technology that is changing our systems in order to combat these new security challenges.

She also suggested having an `AI Red Team’ that continuously stress tests AI integrations and systems:

But I realize this is a big ask, especially because we right now we don't have enough people with the skills to fill all the AI routine jobs.

As in interim position, she suggested taking half of the data scientists and half the security team and get them work together to assess the risks and threat modelling of AI integrations:

As a first step, have your data scientists create a report or lay out all the data streams, access and everything, and then work with your security team to develop a threat model around it, and have them really rethink data security and access management. They do need to continuously work together. And more importantly, if you do decide to integrate an AI agent into your enterprise, then have your security team treat it like an insider threat when threat modelling - because that's what it can be turned into.

My take

The other side of the hype about what AI can most certainly achieve is the potential security disaster it brings with it, because it really is very different from anything  security teams have faced before. It does require the evolution of a new class of specialist – half security expert and half data scientist – with the whole being more valuable than the sum of the parts. And at a time when there are not enough people with the skills to adequate cover resource needs in AI and security individually, this really is a potentially huge spanner hanging over the AI bandwagon.


About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK