
Microsoft report details AI use by China, Russia, North Korea, Iran - The Washington Post

source link: https://www.washingtonpost.com/technology/2024/02/14/us-adversaries-using-artificial-intelligence-boost-hacking-efforts/

U.S. adversaries said to be using AI to boost their hacking efforts

In a new report, Microsoft says Russia, China, Iran and North Korea have all used AI to improve their abilities

February 14, 2024 at 7:00 a.m. EST
Microsoft says in a report that it has detected state-sponsored hacking groups using artificial intelligence. (Joan Mateu Parra/AP)

Russia, China and other U.S. rivals are using the newest wave of artificial intelligence tools to improve their hacking abilities and find new targets for online espionage, according to a report Wednesday from Microsoft and its close business partner OpenAI.

While computer users of all stripes have been experimenting with large language models to help with programming tasks, translate phishing emails and assemble attack plans, the new report is the first to associate top-tier government hacking teams with specific uses of LLMs. It is also the first to describe countermeasures, and it comes amid a continuing debate about the risks of the rapidly developing technology and efforts by many countries to put some limits on its use.

The document attributes various uses of AI to two Chinese government-affiliated hacking groups and to one group from each of Russia, Iran and North Korea, comprising the four countries of foremost concern to Western cyber defenders.

“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft wrote in a summary of its findings.


Microsoft said it had cut off the groups’ access to tools based on OpenAI’s ChatGPT. It said it would notify the makers of other tools it saw being used and continue to share which groups were using which techniques.

The company said it had not found any major AI-powered attacks, but had seen earlier-stage research on specific security flaws, defenses and potential targets.

Sherrod DeGrippo, Microsoft’s director of threat intelligence strategy, acknowledged that the company would not necessarily see everything that followed from that research and that cutting off some accounts would not dissuade attackers from creating new ones.

“Microsoft does not want to facilitate threat actors perpetrating campaigns against anyone,” she said. “That’s our role, to hit them as they evolve.”

Among the state-sponsored hacking groups identified in the report:

  • A top Russian team associated with the military intelligence agency GRU used AI to research satellite and radar technologies that might be relevant to conventional warfare in Ukraine.
  • North Korean hackers used AI to research experts on the country’s military capabilities and to learn more about publicly reported vulnerabilities, including one from 2022 in Microsoft’s own support tools.
  • An Islamic Revolutionary Guard Corps team in Iran sought AI help to find new ways to deceive people electronically and to develop ways to avoid detection.
  • One Chinese government group explored using AI to help create programs and content, while another Chinese group “is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs,” Microsoft wrote.
