
How AI like ChatGPT could be used to spark a pandemic - Vox

source link: https://www.vox.com/future-perfect/2023/6/21/23768810/artificial-intelligence-pandemic-biotechnology-synthetic-biology-biorisk-dna-synthesis


Here’s an important and arguably unappreciated ingredient in the glue that holds society together: Google makes it moderately difficult to learn how to commit an act of terrorism. The first several pages of results for a Google search on how to build a bomb, or how to commit a murder, or how to unleash a biological or chemical weapon, won’t actually tell you much about how to do it.

It’s not impossible to learn these things off the internet. People have successfully built working bombs from publicly available information. Scientists have warned others against publishing the blueprints for deadly viruses because of similar fears. But while the information is surely out there on the internet, it’s not straightforward to learn how to kill lots of people, thanks to a concerted effort by Google and other search engines.



How many lives does that save? That’s a hard question to answer. It’s not as if we could responsibly run a controlled experiment where sometimes instructions about how to commit great atrocities are easy to look up and sometimes they aren’t.

But it turns out we might be irresponsibly running an uncontrolled experiment in just that, thanks to rapid advances in large language models (LLMs).

Security through obscurity

When first released, AI systems like ChatGPT were generally willing to give detailed, correct instructions about how to carry out a biological weapons attack or build a bomb. Over time, OpenAI has corrected this tendency, for the most part. But a class exercise at MIT, written up in a preprint paper earlier this month and covered last week in Science, found that it was easy for groups of undergraduates without relevant background in biology to get detailed suggestions for biological weaponry out of AI systems.

“In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization,” the paper, whose lead authors include MIT biorisk expert Kevin Esvelt, says.

To be clear, building bioweapons requires lots of detailed work and academic skill, and ChatGPT’s instructions are probably far too incomplete to actually enable non-virologists to do it — so far. But it seems worth considering: Is security through obscurity a sustainable approach to preventing mass atrocities, in a future where information may be easier to access?

In almost every respect, more access to information, detailed supportive coaching, personally tailored advice, and other benefits we expect to see from language models are great news. But when a chipper personal coach is advising users on committing acts of terror, it’s not so great news.

But it seems to me that you can attack the problem from two angles.

Controlling information in an AI world

“We need better controls at all the chokepoints,” Jaime Yassif at the Nuclear Threat Initiative told Science. It should be harder to induce AI systems to give detailed instructions on building bioweapons. But also, many of the security flaws that the AI systems inadvertently revealed — like noting that users might contact DNA synthesis companies that don’t screen orders, and so would be more likely to authorize a request to synthesize a dangerous virus — are fixable!

We could require all DNA synthesis companies to do screening in all cases. We could also remove papers about dangerous viruses from the training data for powerful AI systems — a solution favored by Esvelt. And we could be more careful in the future about publishing papers that give detailed recipes for building deadly viruses.
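What that screening involves is itself a technical question. As a purely illustrative sketch, and not a description of any real screening pipeline, one simple approach is to compare each incoming synthesis order against a curated list of sequences of concern and flag orders that share long exact subsequences for human review. The function names, window size, threshold, and placeholder data below are all hypothetical.

# Purely illustrative sketch of order screening, not how any real
# biosecurity screening pipeline works. The k-mer length, threshold,
# and "sequences of concern" entries are hypothetical placeholders.

def kmers(seq, k=30):
    """Return the set of all k-length windows in a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq, sequences_of_concern, k=30, threshold=1):
    """Flag an order if it shares at least `threshold` exact k-mers
    with any sequence on the concern list."""
    order_kmers = kmers(order_seq, k)
    for concern in sequences_of_concern:
        if len(order_kmers & kmers(concern, k)) >= threshold:
            return True
    return False

# Hypothetical usage: an order embedding a fragment of a listed sequence.
concern_db = ["ATGGTACCTTAGGCAT" * 10]   # placeholder entries, not real data
order = "CCGG" + concern_db[0][40:120] + "TTAA"
if screen_order(order, concern_db):
    print("Order flagged for manual biosecurity review")

A real system would also have to cope with near-matches, reverse complements, and orders split across multiple providers, which is part of why requiring screening in all cases, rather than at some companies, matters.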

The good news is that positive actors in the biotech world are beginning to take this threat seriously. Ginkgo Bioworks, a leading synthetic biology company, has partnered with US intelligence agencies to develop software that can detect engineered DNA at scale, providing investigators with the means to fingerprint an artificially generated germ. That alliance demonstrates the ways that cutting-edge technology can protect the world against the malign effects of ... cutting-edge technology.
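The article doesn't say how that detection software works. As a toy illustration of one conceivable signal, and emphatically not Ginkgo's actual method, an attribution tool might check whether a gene's codon usage looks more like a lab-optimized construct than like the organism it supposedly came from. The reference frequencies and cutoff below are invented placeholders.

# Toy illustration only: codon-usage deviation as one conceivable hint
# that a gene was engineered (e.g., codon-optimized for a lab host).
# Reference frequencies and the cutoff are invented placeholders.
from collections import Counter

def codon_usage(cds):
    """Observed codon frequencies in a coding sequence."""
    cds = cds.upper()
    counts = Counter(cds[i:i + 3] for i in range(0, len(cds) - 2, 3))
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}

def usage_deviation(cds, expected):
    """Sum of absolute differences from the organism's expected usage.
    Larger values suggest the sequence may not be natural."""
    observed = codon_usage(cds)
    codons = set(observed) | set(expected)
    return sum(abs(observed.get(c, 0.0) - expected.get(c, 0.0)) for c in codons)

# Hypothetical usage with made-up numbers.
expected = {"CTG": 0.5, "TTA": 0.5}           # pretend the host favors these
suspect = "CTG" * 9 + "TTA"                   # heavily skewed toward CTG
if usage_deviation(suspect, expected) > 0.5:  # invented cutoff
    print("Sequence looks codon-optimized; flag for attribution review")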

AI and biotech both have the potential to be tremendous forces for good in the world. And managing risks from one can also help with risks from the other — for example, making it harder to synthesize deadly plagues protects against some forms of AI catastrophe just like it protects against human-mediated catastrophe. The important thing is that, rather than letting detailed instructions for bioterror get online as a natural experiment, we stay proactive and ensure that printing biological weapons is hard enough that no one can trivially do it, whether ChatGPT-aided or not.

A version of this story was initially published in the Future Perfect newsletter. Sign up here to subscribe!


