
When your teammate is a machine: 8 questions CISOs should be asking about AI

source link: https://www.csoonline.com/article/648380/when-your-teammate-is-a-machine-how-cisos-are-learning-to-embrace-ai.html


Feature
Aug 03, 2023 | 9 mins
CSO and CISO | Data and Information Security | Generative AI

The inevitability of AI is forcing many cybersecurity leaders to decide if it's friend or foe. Treating it as a teammate may be the ultimate solution, but there are a number of pointed questions CISOs should be asking.


Artificial intelligence is changing the way we do just about everything. Everywhere we turn, machines are performing tasks that would once have been handled by a human, from autonomous vehicles to the customer service bots that must be navigated before a person comes on the line. In cybersecurity, AI has quickly become both a friend to defenders and a force multiplier for adversaries. Like it or not, treating the machine as a teammate has become a reality that CISOs will have to embrace, but there are a number of pointed questions they should ask before taking on an AI sidekick.

The concept is not new. In 2019, an international team of 65 collaboration scientists generated 819 research questions on the topic, with the intent "to provide a research agenda that collaboration researchers can use to investigate the anticipated effects of designed machine teammates based on the qualified opinions of collaboration researchers." No doubt some of those research points found their way into the US Department of Defense's responsible AI principles and guidance, which set out five attributes any AI must have before it is acceptable for use: it must be responsible, equitable, traceable, reliable, and governable.

Letting an AI be your wingman

To envision the concept of AI as teammate in action, one need only look at the US Air Force's plan to enhance the effectiveness of its F-35 multirole combat aircraft by pairing it with battle drones that function as autonomous wingmen. Working with AI-enhanced drones, the aircraft can amass information at speeds beyond human capability. This enables "movement through the observe, orient, decide, act (OODA) loop with speed and agility, which in turn allows the recipient of real-time information to be more adroit," according to J.R. Seeger, a retired CIA officer and novelist.

AI will effectively become an extension of automation processes, uncovering a vastly expanded breadth of information and helping to evaluate complexities at ever greater speed, says StrikeReady CEO Anurag Gurtu. "AI works best when the CISO is looking to enhance their productivity, augment the capabilities of a skilled analyst, offload a portion of the workload, and retain employees," Gurtu says.

With AI, the speed of decision-making is king

While it may often feel as if we have our foot on the "pedal to the metal and no brakes," Gurtu says, "AI also assists in the ability to exercise process at velocity and enhances the detection chore and may be tuned to provide the analyst with an event probability of being targeted or attacked."

In the past, decision trees and rules-based models made threat and vulnerability detection a fairly laborious process, but "with AI we can bring in disparate data sets and improve the analyst's 'explainability'," Gurtu says, adding that LIME (local interpretable model-agnostic explanations) and SHAP (Shapley additive explanations) both help with the explainability chore.
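To make that concrete, here is a minimal sketch of how SHAP can attribute a model's alert score to individual features so an analyst can see why an alert scored the way it did. It assumes a tree-based scikit-learn model; the feature names and synthetic data are hypothetical, for illustration only.

```python
# Minimal sketch: explaining an alert-scoring model with SHAP.
# The features and synthetic "risk score" below are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

feature_names = ["failed_logins", "bytes_out", "rare_process_count", "off_hours"]
rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names)))
# Synthetic risk score driven mostly by failed logins and rare processes.
y = 0.6 * X[:, 0] + 0.4 * X[:, 2]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain a single alert

# Per-feature contribution to this alert's score, for the analyst.
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```

An analyst-facing tool would surface these per-feature contributions next to the alert itself, which is the "explainability" Gurtu is describing.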

"More and more entities are incorporating generative AI and they must be prepared for an uptick in 'hallucinations' and as more do so, massive hallucinations are coming," Gurtu says. The means to avoid hallucinations in the results of generative AI is the use of a graph AI language model, he says.

To illustrate the point, one need only look at a recent lawyer's brief, submitted to a court, that was written with the assistance of an AI chatbot that 'hallucinated' nonexistent case law when it could find no real-world examples. The incident resulted in the judge issuing a standing order that any brief created using AI be identified as such and verified by a human. "Utilizing the graph methodology, the AI gives the user extreme power to understand with context," Gurtu says. "Without such, as noted, [the result is] massive hallucinations."
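As a rough illustration of the graph approach Gurtu describes, the sketch below constrains answers to facts that actually exist as edges in a small knowledge graph and refuses to answer otherwise, rather than inventing something plausible. The graph contents, identifiers, and relation names are hypothetical.

```python
# A minimal sketch of graph-grounded answering: only state facts that
# exist as edges in a knowledge graph; otherwise answer "unknown".
# The CVE identifiers and relations here are hypothetical.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("CVE-2024-0001", "openssl-3.2", relation="affects")
kg.add_edge("openssl-3.2", "payments-api", relation="used_by")

def grounded_answer(subject: str, relation: str) -> str:
    """Return only graph-backed facts; never invent an answer."""
    facts = [v for _, v, d in kg.out_edges(subject, data=True)
             if d.get("relation") == relation]
    return ", ".join(facts) if facts else "unknown (no supporting fact in graph)"

print(grounded_answer("CVE-2024-0001", "affects"))   # openssl-3.2
print(grounded_answer("CVE-2024-0001", "fixed_by"))  # unknown (...)
```

The design choice is the point: where the hallucinating chatbot filled a gap with fabricated case law, a graph-grounded system surfaces the gap instead.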

Machine teammates will need to be compatible with people

Virtually every sector will eventually be affected by AI and find itself with a machine as a teammate. In a Frontiers in Psychology article published in August 2022, the authors noted that effective teamwork must be in place for human teams to succeed: "Factors such as leadership, conflict resolution, adaptability, and backup behavior, among many others, have been identified as critical aspects of teamwork supporting team outcomes."

Extrapolating to future human-machine teams, the authors wrote that success "will depend, in part, on machine agents that have been designed to successfully facilitate and participate in teamwork with human teammates."

It is within this context that trust continues to be a major consideration with AI. How many organizations will ensure that the chief trust officer's responsibilities include the ethical, moral, and responsible use of AI in products and engagements? When the AI makes an error, who reports it? Who corrects it? And how does one measure trust in the relationship between machine and human teammates?

Questions every CISO needs to ask about AI

There are many potential benefits to incorporating AI into security technology, according to Rebecca Herold, an IEEE member and founder of The Privacy Professor consultancy: streamlining work to shorten project timelines, making decisions quickly, and finding problems more expeditiously.

But, she adds, there are a lot of half-baked instances being employed and buyers "end up diving into the deep end of the AI pool without doing one iota of scrutiny about whether or not the AI they view as the HAL 9000 savior of their business even works as promised."

She also warns that when "flawed AI results go very wrong, causing privacy breaches, bias, security incidents, and noncompliance fines, those using the AI suddenly realize that this AI was more like the dark side of HAL 9000 than they had even considered as being a possibility."

Eight questions every CISO should ask about AI

To avoid having your AI teammate tell you, "I'm sorry, Dave, I'm afraid I can't do that," when you are asking for results that are accurate, non-biased, privacy-protective, and in compliance with data protection requirements, Herold advises that every CISO ask eight questions:

  1. Did comprehensive testing occur to ensure the AI algorithm works as intended? Ask the manufacturer and/or vendor for documentation that confirms such testing, and confirm the standards and/or frameworks used, such as the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0).
  2. From where did the data used to train the AI come? If it includes personal data, the associated individuals must provide their consent to use their personal data for such purposes.
  3. How was the AI algorithm designed to prevent, or mitigate as much as possible, bias in the results? Ask to see documented results.
  4. How was the algorithm designed to mitigate the new and challenging risks that emerge almost daily related to generative AI? Ask to see documentation about their plan to manage this on an ongoing basis.
  5. Has the vendor comprehensively addressed security concerns related to machine learning and if so, how? Ask to see documented policies and procedures.
  6. Has the AI been engineered to account for the complexity of AI systems' attack surfaces, and if so, in what ways? Ask for documentation to validate the information provided.
  7. How have supply chain and third-party AI components been reviewed for security and privacy risk, and then mitigated? Make sure there is an ongoing AI supply chain and third-party risk management process in place.
  8. Has the AI manufacturer or vendor developed their AI products to meet data protection compliance for the areas in which they will be sold?

It is not enough, Herold says, to trust what the sales team is saying; one must develop the ability to ferret out answers to the tough questions or find third parties who are both trustworthy and competent.
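For teams that want to make that scrutiny repeatable, one lightweight option is to encode the eight questions as a structured checklist that travels with each vendor assessment. The sketch below is a hypothetical illustration; the field names, wording, and pass/fail logic are not drawn from any standard.

```python
# A minimal sketch of the eight questions as a reusable vendor-assessment
# checklist. Structure and wording are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    evidence_to_request: str  # documentation to ask the vendor for
    satisfied: bool = False
    notes: str = ""

AI_VENDOR_CHECKLIST = [
    ChecklistItem("Comprehensive testing that the algorithm works as intended?",
                  "Test reports naming standards/frameworks, e.g. NIST AI RMF 1.0"),
    ChecklistItem("Provenance of training data, with consent for personal data?",
                  "Data-source inventory and consent records"),
    ChecklistItem("Bias prevention or mitigation in the algorithm design?",
                  "Documented bias-testing results"),
    ChecklistItem("Plan for emerging generative-AI risks?",
                  "Documented, ongoing risk-management plan"),
    ChecklistItem("Machine learning security concerns addressed?",
                  "Documented policies and procedures"),
    ChecklistItem("Engineering for the complexity of AI attack surfaces?",
                  "Documentation validating the claims made"),
    ChecklistItem("Supply chain and third-party AI components reviewed?",
                  "Evidence of an ongoing third-party risk-management process"),
    ChecklistItem("Data protection compliance for each market where sold?",
                  "Compliance mappings for the relevant jurisdictions"),
]

def unresolved(items: list[ChecklistItem]) -> list[str]:
    """Questions still lacking satisfactory, documented answers."""
    return [item.question for item in items if not item.satisfied]

if __name__ == "__main__":
    print(f"{len(unresolved(AI_VENDOR_CHECKLIST))} of 8 questions unresolved")
```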

Humans must guide when AI is a partner

When a machine is a teammate, a human must own the accountability and responsibility of the machine's decisions, says investor and speaker Barry Hurd. "Working with AI teammates will require specialized talents to optimize the working relationship and not break things," he says. "Humans are not built to operate with the same tolerances as a machine. If we think of a science fiction movie where a mechanical arm is indestructible compared to a weak human body, our logic and decision-making capabilities have similar frailty compared to the processing speed of an AI team member."

Machines multiply our actions, whether they are right or wrong, Hurd notes. "The scale and speed need to be in balance with our reaction time as humans to preserve ethical, legal, and moral time to action. AI at scale means the potential for collateral damage at scale across a wide range of departmental areas."

That will create challenges in deciding the number of human failsafe layers required to give anyone operating with an AI system time to consider what's acceptable. "Once the decision to act has been made, the resulting action will be over before we can second guess what just happened," Hurd says.

"However, paired with a talented group of human partners who understand where effective multiples can be achieved there can be a symbiotic relationship where critical thinking, subject matter expertise, and morality are in balance with calculated action and scaled automation. Risk can be minimized, and effectiveness multiplied at that point. The best humans will enable the best technology and vice versa."

It stands to reason that when an AI teammate has made a decision, the human teammate "needs to be able to explain why a decision was made," says Gurtu.

