
Google upgrades Bard with technology from its cutting-edge PaLM language model

source link: https://siliconangle.com/2023/03/31/google-upgrades-bard-technology-cutting-edge-palm-language-model/


Google LLC has enhanced its Bard chatbot’s capabilities using technology from PaLM, an advanced language model that it debuted last year.

Google and Alphabet Inc. Chief Executive Officer Sundar Pichai detailed the update in a New York Times interview published today. PaLM will “bring more capabilities; be it in reasoning, coding, it can answer maths questions better,” Pichai said. Jack Krawczyk, the product manager for Bard at Google, added in a tweet that the update has already been released. 

The new version of Bard is described as being more adept at solving math problems. The chatbot can answer “multistep” text prompts more reliably as well, Krawczyk stated. Further down the line, Google also expects improvements in Bard’s ability to generate software code. 

PaLM, the language model that the search giant used to enhance Bard, was first detailed by its researchers last year. The model features 540 billion parameters, the values a neural network learns during training that determine how it processes data. Generally, the more parameters a model has, the broader the range of tasks it can handle.

The PaLM model demonstrated impressive performance in a series of internal evaluations carried out by Google. During one test that involved 28 natural language processing tasks, it achieved a higher score than OpenAI LP’s GPT-3 model. It also set new records in two math and coding benchmarks.

Google trained PaLM on two TPU v4 Pods hosted in its public cloud. Each TPU v4 Pod includes 4,096 chips optimized specifically to run AI workloads. Combined, those chips can provide up to 1.1 exaflops of performance, which equals 1.1 million trillion calculations per second.
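The figures in the paragraph above can be sanity-checked with a few lines of arithmetic. Note that the implied per-chip throughput below is derived only from the article's quoted totals, not from an official TPU v4 spec sheet:

```python
# Sanity check on the article's figures: two TPU v4 Pods of 4,096 chips each,
# with a combined peak of 1.1 exaflops.
CHIPS_PER_POD = 4096        # chips per TPU v4 Pod, per the article
NUM_PODS = 2                # pods used to train PaLM, per the article
PEAK_FLOPS = 1.1e18         # 1.1 exaflops, per the article

# One exaflop = 10^18 operations/second = one million trillion.
million_trillion = 1e6 * 1e12
print(PEAK_FLOPS / million_trillion)   # ~1.1 million trillion ops/sec

# Implied per-chip throughput if 1.1 exaflops spans both pods
# (a derived figure, not an official specification):
per_chip_tflops = PEAK_FLOPS / (NUM_PODS * CHIPS_PER_POD) / 1e12
print(per_chip_tflops)                 # ~134 teraflops per chip
```

The per-chip number is only an average implied by the quoted totals; real accelerator peak ratings vary by numeric precision and workload.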

During the development of PaLM, Google managed the AI training process using an internally developed software system called Pathways. The system distributes the computations involved in training an AI model across multiple chips to speed up the workflow. When running PaLM, Pathways used 57.8% of the underlying chips’ processing performance, which Google says set a new industry record.
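To put the 57.8% utilization figure in absolute terms, it can be combined with the peak throughput quoted earlier. This is a rough sketch under the assumption that the percentage applies to the combined 1.1-exaflop peak:

```python
# What 57.8% hardware utilization means in absolute terms,
# assuming it applies to the combined 1.1-exaflop peak quoted above.
PEAK_FLOPS = 1.1e18     # combined peak of the two TPU v4 Pods, per the article
UTILIZATION = 0.578     # utilization Google reported for Pathways

sustained_flops = PEAK_FLOPS * UTILIZATION
print(sustained_flops / 1e18)   # ~0.64 exaflops sustained
```

In other words, even a record-setting utilization rate leaves a sizable gap between a cluster's peak rating and the throughput an AI training job actually sustains.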

The original version of Bard that Google introduced last month was based on an AI model called LaMDA. Google first detailed LaMDA last January, three months before it debuted PaLM. LaMDA supported up to 137 billion parameters at the time of its introduction, while PaLM features 540 billion.

“We clearly have more capable models,” Pichai told the Times in reference to LaMDA. “To me, it was important to not put [out] a more capable model before we can fully make sure we can handle it well.” 

Image: Google


