
‘This is his climate change’: The experts helping Rishi Sunak seal his legacy

Several organisations are advising the prime minister on handling the dystopian threat of AI

By James Titcomb

23 September 2023 • 7:00am

Rishi Sunak wants Britain to lead on AI safety

Credit: IAN VOGLER/POOL/AFP via Getty Images

It took just 23 words for the world to sit up and pay attention. In May, the Center for AI Safety, a US non-profit, published a one-sentence statement warning that the risk of extinction from AI should be treated as a global priority alongside pandemics and nuclear war.

Those who endorsed the statement included Geoffrey Hinton, known as the Godfather of AI; Yoshua Bengio, whose work with Hinton won the coveted Turing Award for computer science; and Demis Hassabis, the head of the Google-owned British AI lab DeepMind.

The statement helped to shift public perception of AI from a handy office aide to the kind of threat usually seen only in dystopian science fiction.

The Center itself describes its mission as reducing the “societal-scale risks from AI”. It is now one of a handful of California-based organisations advising Rishi Sunak’s government on how to handle the rise of the technology.

In recent months, observers have detected an increasingly apocalyptic tone in Westminster. In March, the Government unveiled a white paper promising not to “stifle innovation” in the field. Yet just two months later, Sunak was talking about “putting guardrails in place” and pressing Joe Biden to embrace his plans for global AI rules.

Sunak’s legacy moment

An AI safety summit at Bletchley Park in November is expected to focus almost entirely on existential risks and how to mitigate them.

Despite myriad political challenges, Sunak is understood to be deeply involved in the AI debate. “He’s zeroed in on it as his legacy moment. This is his climate change,” says one former government adviser.

In November, Bletchley Park will host Prime Minister Rishi Sunak's AI Safety Summit

Credit: Simon Walker / No 10 Downing Street

In the last year, Downing Street has assembled a tight-knit team of researchers to work on AI risk. Ian Hogarth, a tech investor and the founder of the concert-finding app Songkick, was enlisted as the head of the Foundation Model Taskforce after penning a viral Financial Times article warning of the “race to God-like AI”.

This month, the body was renamed the “Frontier AI Taskforce” – a reference to the bleeding edge of the technology where experts see the most risk. Feared applications include creating bioweapons or orchestrating mass disinformation campaigns.

Human-level AI systems ‘just a few years away’

Hogarth has assembled a heavyweight advisory board including Bengio, who has warned that human-level AI systems are just a few years away and pose catastrophic risks, and Anne Keast-Butler, the director of GCHQ. A small team is currently testing the most prominent AI systems such as ChatGPT, probing for weaknesses.

Hogarth recently told a House of Lords committee that the taskforce is dealing with what are “fundamentally matters of national security”.

“An AI that is very capable of writing software… can also be used to conduct cybercrime or cyberattacks. An AI that is very capable of manipulating biology can be used to lower the barriers to entry to perpetrating some sort of biological attack,” he said.

Leading preparations for the AI summit are Matt Clifford, an entrepreneur who chairs the Government’s blue-sky research agency Aria, and Jonathan Black, a senior diplomat. The pair, who have been dubbed Number 10’s AI “sherpas”, were in Beijing last week to drum up support for the summit.

Meanwhile, the research organisations now working with the taskforce have raised eyebrows for their links to the effective altruism (EA) movement, a philosophy centred on using resources to do the greatest possible good.

The movement has become controversial for concentrating on long-term but unclear risks such as AI – judging that the lives of people in the future are as valuable as those in the present – and for its close association with FTX, the bankrupt cryptocurrency exchange founded by the alleged fraudster Sam Bankman-Fried.

Of the six research organisations working with the UK taskforce, three – the Collective Intelligence Project, the Alignment Research Center and Redwood Research – were awarded grants by FTX, which dished out millions to non-profits before going bust. (The Collective Intelligence Project has said it is unsure whether it can spend the money; the Alignment Research Center returned it; Redwood never received it.)

One AI researcher defends the associations, saying that until this year effective altruists were the only ones thinking about the subject. “Now people are realising it’s an actual risk but you’ve got these guys in EA who were thinking about it for the last 10 years.”

No guarantee tighter regulation will yield results

Those close to the taskforce are said to have brushed off a recent piece in Politico, the Westminster-focused political website, that laid out the strong ties to EA. It focused on the controversial aspects of the movement but, as a source close to the process says: “The inside joke is that they’re not effective or altruists.”

Still, start-ups have raised concerns that the focus on existential risk could stifle innovation and hand control of AI to Big Tech. One lobbyist says that, counterintuitively, this obsession with risk could concentrate power in the hands of major AI labs such as DeepMind, Anthropic and OpenAI, the company behind ChatGPT (the bosses of the three labs held a closed-door meeting with Sunak in May).

Rishi Sunak meeting with Demis Hassabis, chief executive of DeepMind, Dario Amodei, chief executive of Anthropic, and Sam Altman, chief executive of OpenAI, in 10 Downing Street in May

Credit: Simon Walker / No 10 Downing Street

Hogarth has insisted these companies cannot be left to “mark their own homework”, but if government safety work ends up with something like a licensing regime for AI models, they are the most likely to benefit. “What we are witnessing is regulatory capture happening in real time,” the lobbyist says.

Baroness Stowell, the chair of the Lords communications and digital committee, has written to the Government demanding details on how Hogarth is managing potential conflicts of interest around his more than 50 AI investments, which include Anthropic and the defence company Helsing.

There is no guarantee that the current push for tighter regulation will yield results; past efforts have fallen by the wayside. Last week it emerged that the Government had disbanded the Centre for Data Ethics and Innovation Advisory Board, created five years ago to address areas such as AI bias.

However, those close to the current process believe the focus in Downing Street is now sharper. And to the clutch of researchers working on preventing the apocalypse, the existential risks are more important than other considerations.

“It’s a big opportunity for global Britain, a thing that the UK can actually lead on,” says Shabbir Merali, who developed AI strategy at the Foreign Office and now works at the think tank Onward. “It would be strange not to focus on existential risk: that’s where you want nation-state capability to be.”

