
Nvidia banking on TensorRT to expand generative AI dominance

source link: https://www.theverge.com/2023/10/17/23920945/nvidia-gpus-tensor-llms-ai


The company is adding its TensorRT-LLM to Windows, announcing an intention to play a bigger role in the inference side of AI.

By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.

Oct 17, 2023, 7:34 PM UTC


Illustration of an Nvidia logo by Alex Castro / The Verge

Nvidia looks to build a bigger presence outside GPU sales as it puts its AI-specific software development kit into more applications.

Nvidia announced that it’s adding support for its TensorRT-LLM SDK to Windows and models like Stable Diffusion. The company said in a blog post that it aims to make large language models (LLMs) and related tools run faster.

TensorRT speeds up inference, the process of running an input through a pretrained model and calculating probabilities to arrive at a result, like a newly generated Stable Diffusion image. With this software, Nvidia wants to play a bigger part in the inference side of generative AI.
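That probability-calculating step can be sketched in plain Python. This is a toy illustration of how a language model picks its next token, not anything from TensorRT itself; the three-word vocabulary and the logit values are invented for the example:

```python
import math

def softmax(logits):
    """Convert a model's raw scores (logits) into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and the logits a pretrained model might emit for the next token.
vocab = ["cat", "dog", "GPU"]
logits = [1.0, 0.5, 3.0]

probs = softmax(logits)
# Greedy decoding: pick the most probable token. Repeating this loop,
# one token at a time, is what "inference" means for an LLM.
next_token = vocab[probs.index(max(probs))]
```

Real inference repeats this step once per generated token, which is why software that shaves time off each step matters so much at scale.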

Its TensorRT-LLM breaks down LLMs and lets them run faster on Nvidia’s H100 GPUs. It works with LLMs like Meta’s Llama 2 and other AI models like Stability AI’s Stable Diffusion. The company said by running LLMs through TensorRT-LLM, “this acceleration significantly improves the experience for more sophisticated LLM use — like writing and coding assistants.”
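One common way inference engines of this kind cut cost is by running model weights at reduced numeric precision. The sketch below is a generic illustration of symmetric int8 quantization, not Nvidia's actual implementation, and the weight values are made up for the example:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]

weights = [0.02, -1.5, 0.75, 1.5]        # pretend float32 model weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# The int8 values use a quarter of the memory of float32,
# at the cost of a small rounding error per weight.
errors = [abs(a - w) for a, w in zip(approx, weights)]
```

Smaller weights mean less memory traffic per token, which is a large part of where the speedups come from on hardware like the H100.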

In other words, Nvidia hopes that it will not only provide the GPUs that train and run LLMs but also provide the software that allows models to run and work faster so users don’t seek other ways to make generative AI cost-efficient.

The company said TensorRT-LLM will be “available publicly to anyone who wants to use or integrate it,” and the SDK can be downloaded from its site.

Nvidia already has a near monopoly on the powerful chips that train LLMs like GPT-4 — and to train and run one, you typically need a lot of GPUs. Demand has skyrocketed for its H100 GPUs; estimated prices have reached $40,000 per chip. The company announced a newer version of its GPU, the GH200, coming next year. No wonder Nvidia’s revenues increased to $13.5 billion in the second quarter.

But the world of generative AI moves fast, and new methods to run LLMs without needing a lot of expensive GPUs have come out. Companies like Microsoft and AMD announced they’ll make their own chips to lessen the reliance on Nvidia. 

And companies have set their sights on the inference side of AI development. AMD plans to buy software company Nod.ai to help LLMs run specifically on AMD chips, while companies like SambaNova already offer services that make it easier to run models.

Nvidia, for now, remains the hardware leader in generative AI, but it already looks like it’s angling for a future where people don’t have to depend on buying huge numbers of its GPUs. 

