How Google is accelerating ML development

source link: https://venturebeat.com/ai/how-google-is-accelerating-ml-development/


Accelerating machine learning (ML) and artificial intelligence (AI) development with optimized performance and cost is a key goal for Google.

Google kicked off its Next 2022 conference this week with a series of announcements about new AI capabilities in its platform, including computer vision as a service with Vertex AI Vision and the new OpenXLA open-source ML initiative. In a session at the event, Mikhail Chrestkha, outbound product manager at Google Cloud, discussed additional incremental AI improvements, including support for the Nvidia Merlin recommender system framework, AlphaFold batch inference, and TabNet.


Users of the new technology detailed their use cases and experiences during the session. 


“Having access to strong AI infrastructure is becoming a competitive advantage to getting the most value from AI,” Chrestkha said.

Uber using TabNet to improve food delivery

TabNet is a deep learning approach for tabular data that uses transformer techniques to help improve speed and relevance.

Chrestkha explained that TabNet is now available in the Google Vertex AI platform, which makes it easier for users to build explainable models at large scale. He noted that Google’s implementation of TabNet will automatically select the appropriate feature transformations based on the input data, size of the data and prediction type to get the best results.
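
To make the technique concrete, here is a minimal sketch using the open-source pytorch-tabnet package rather than Google's managed offering; the toy dataset and hyperparameters are illustrative assumptions, not details of the Vertex AI implementation, which selects feature transformations automatically.

```python
# Minimal TabNet sketch with the open-source pytorch-tabnet package
# (pip install pytorch-tabnet). Data and hyperparameters are toy
# placeholders; Google's managed Vertex AI version is configured
# differently and tunes these choices automatically.
import numpy as np
from pytorch_tabnet.tab_model import TabNetRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10)).astype(np.float32)
# Continuous target with a nonlinear dependence on two features.
y = (3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(size=1000))
y = y.astype(np.float32).reshape(-1, 1)  # regressor expects 2D targets

X_train, X_valid = X[:800], X[800:]
y_train, y_valid = y[:800], y[800:]

model = TabNetRegressor(
    n_d=8,      # width of the decision prediction layer
    n_a=8,      # width of the attention embedding
    n_steps=3,  # number of sequential attention steps
)
model.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    max_epochs=50,
    patience=10,
)

# Per-feature importances come from TabNet's attention masks.
print(model.feature_importances_)
```

The feature importances at the end come from TabNet's sequential attention masks, which is what makes the resulting models explainable rather than black boxes.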

TabNet is not merely a theoretical approach to improving AI predictions; it is already showing positive results in real-world use cases. Among its early implementers is Uber.

Kai Wang, senior product manager at Uber, explained that a platform his company created called Michelangelo handles 100% of Uber's ML use cases today. Those use cases include ride estimated time of arrival (ETA), Uber Eats estimated time to delivery (ETD), and rider and driver matching.

The basic idea behind Michelangelo is to provide Uber's ML developers with infrastructure on which models can be deployed. Wang said that Uber constantly evaluates and integrates third-party components, while selectively investing in key platform areas to build in-house. One of the foundational third-party tools Uber relies on is Vertex AI, which helps support ML training.
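
The session did not describe Uber's integration in detail; purely as an illustration of what using Vertex AI for training can look like, here is a minimal sketch with the google-cloud-aiplatform Python SDK. The project ID, bucket, script, and container tag are hypothetical placeholders, not Uber's actual configuration.

```python
# Hypothetical sketch: submitting a custom training job to Vertex AI.
# Project, bucket, script, and container names are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="example-project",             # hypothetical GCP project ID
    location="us-central1",
    staging_bucket="gs://example-bucket",  # hypothetical staging bucket
)

job = aiplatform.CustomTrainingJob(
    display_name="prep-time-model-training",
    script_path="train.py",  # local training script to package and upload
    # Prebuilt training image; exact tag is illustrative.
    container_uri="us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-12.py310:latest",
)

# Provision machines, run the script, and tear everything down afterward.
job.run(
    replica_count=1,
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
)
```

The appeal for an in-house platform like Michelangelo is that the SDK handles packaging, provisioning, and teardown, leaving the platform team to focus on orchestration.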

Wang noted that Uber has been evaluating TabNet against its real-life use cases. One example is Uber Eats' prep-time model, which is used to estimate how long it takes a restaurant to prepare food after an order is received. Wang emphasized that the prep-time model is one of the most critical models in use at Uber Eats today.

“We compared the TabNet results with the baseline model and the TabNet model demonstrated a big lift in terms of the model performance,” Wang said. 

Just the FAX for Cohere

Cohere develops platforms that help organizations benefit from the natural language processing (NLP) capabilities that are enabled by large language models (LLMs).

Cohere is also benefiting from Google's AI innovations. Siddhartha Kamalakara, a machine learning engineer at Cohere, explained that his company has built a proprietary ML training framework called FAX, which now makes heavy use of Google Cloud's TPUv4 AI accelerator chips. FAX's job, he explained, is to consume billions of tokens and train models ranging from hundreds of millions of parameters to hundreds of billions.

“TPUv4 pods are some of the most powerful AI supercomputers in the world, and a full V4 pod has 4,096 chips,” Kamalakara said. “TPUv4 enables us to train large language models very fast and bring those improvements to customers right away.”
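
FAX itself is proprietary and the article gives no implementation details; as an assumption-laden illustration of the kind of TPU workload involved, the sketch below shows data-parallel training across accelerator chips in JAX, with a toy linear model standing in for a real language model.

```python
# Illustrative only: this is NOT FAX, just the common JAX pattern for
# data-parallel training across TPU chips, with a toy model.
from functools import partial

import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model; an LLM would use a transformer here.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@partial(jax.pmap, axis_name="batch")  # replicate the step on every chip
def train_step(params, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    # Average gradients across chips: the core of data parallelism.
    grads = jax.lax.pmean(grads, axis_name="batch")
    return jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)

n_dev = jax.local_device_count()  # e.g. TPU cores visible on this host
params = {"w": jnp.zeros((16, 1)), "b": jnp.zeros((1,))}
params = jax.device_put_replicated(params, jax.local_devices())

# Shard the global batch: [devices, per-device batch, features].
x = jnp.ones((n_dev, 32, 16))
y = jnp.ones((n_dev, 32, 1))

params = train_step(params, x, y)
```

Scaling from a single host to a full 4,096-chip pod layers model and pipeline parallelism on top of this basic data-parallel pattern.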


