Apple’s M1 Chip is Exactly What Machine Learning Needs

source link: https://medium.com/datadriveninvestor/apples-m1-chip-is-exactly-what-machine-learning-needs-507db0d646ae

AI at the edge.

Photo by Grzegorz Walczak on Unsplash

Edge Computing

The future of machine learning is at the “edge,” meaning the devices at the outer boundary of a network, as opposed to centralized servers.

In a centralized machine learning network, users send data to a server, which runs the model and returns a prediction. That round trip is slower, more expensive, less reliable, and less secure than edge computing, where predictions are made directly on the user’s device.
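
To make the contrast concrete, here is a minimal Swift sketch of the two paths. The endpoint URL and the classify function are hypothetical placeholders, not a real service or API:

import Foundation

// Hypothetical stand-in for an on-device model; a real app would call
// into Core ML or a similar framework here.
func classify(_ input: Data) -> String {
    input.isEmpty ? "empty" : "non-empty"
}

// Centralized path: the raw input crosses the network twice per
// prediction, adding latency, server cost, and a point of failure.
func centralizedPredict(_ input: Data) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.example.com/predict")!)
    request.httpMethod = "POST"
    request.httpBody = input
    let (body, _) = try await URLSession.shared.data(for: request)
    return body
}

// Edge path: inference happens locally, so there is no round trip and
// the raw data never leaves the device.
func edgePredict(_ input: Data) -> String {
    classify(input)
}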

The problem with edge computing is that mobile and IoT devices are generally weak and low-powered, while AI models often have intense compute requirements.

Apple’s M1 chip is the answer.

M1 Chip

The M1 is a breakthrough for machine learning at the edge: its Neural Engine can execute 11 trillion operations per second, which Apple says translates to up to 15x faster machine learning performance than the previous generation of Macs.

Built with cutting-edge 5-nanometer process technology, the M1 packs in 16 billion transistors. That performance doesn’t come at any cost to efficiency: the first M1 Macs deliver up to 2x longer battery life than their predecessors.

M1’s Neural Engine

Much of the M1’s efficiency in AI computing comes from its neural engine, a type of NPU, or Neural Processing Unit. Unlike a general-purpose CPU or GPU, an NPU is dedicated to accelerating neural network operations like matrix math.
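
In practice, developers don’t program the neural engine directly; they opt in through a framework such as Core ML and let the system schedule the work. A minimal sketch, with a placeholder model path:

import CoreML
import Foundation

// Ask Core ML to use any available compute unit: CPU, GPU, or the
// Neural Engine, whichever it judges best for each part of the model.
let config = MLModelConfiguration()
config.computeUnits = .all

// Load a compiled Core ML model (.mlmodelc); the path is hypothetical.
let modelURL = URL(fileURLWithPath: "/path/to/SomeModel.mlmodelc")
do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Loaded:", model.modelDescription)
} catch {
    print("Failed to load model:", error)
}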

You’ve probably heard of another famous NPU out there: Google’s TPU, or Tensor Processing Unit.

The Implications of the M1 Chip

Fast, efficient chips are quickly becoming a must-have, not a nice-to-have. The state-of-the-art language model GPT-3 has 175 billion parameters, and inference at that scale naturally has intense compute requirements.
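
A rough back-of-envelope shows why. Assuming about two operations per parameter per generated token, and taking the neural engine’s 11-trillion-operations-per-second peak at face value (in reality a 175-billion-parameter model wouldn’t fit in an M1’s memory, and bandwidth would dominate), the per-token cost works out as follows:

// Back-of-envelope cost of GPT-3 inference, per generated token.
// Assumes ~2 operations per parameter per token, at peak throughput.
let parameters = 175e9           // GPT-3 parameter count
let opsPerToken = 2 * parameters // roughly 350 billion operations
let peakOpsPerSecond = 11e12     // M1 Neural Engine peak

let msPerToken = opsPerToken / peakOpsPerSecond * 1_000
print("about \(Int(msPerToken)) ms per token at peak")
// Prints about 31 ms, an optimistic lower bound, not a benchmark.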

Centralized networks could simply be too slow to deploy these ever-heavier models. Devices with the M1 chip (currently the MacBook Air, Mac mini, and MacBook Pro) make training and deploying AI models on-device far more feasible.
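
Core ML even exposes an on-device training hook, MLUpdateTask, which can fine-tune an updatable model without any data leaving the machine. A hedged sketch: the model file and feature names are hypothetical, and the model must have been compiled with updatable layers:

import CoreML
import Foundation

// Path to a compiled, updatable Core ML model; hypothetical.
let classifierURL = URL(fileURLWithPath: "/path/to/Classifier.mlmodelc")

do {
    // One labeled training example; a real app would batch many.
    let example = try MLDictionaryFeatureProvider(
        dictionary: ["input": 0.5, "label": "positive"])
    let trainingData = MLArrayBatchProvider(array: [example])

    let task = try MLUpdateTask(
        forModelAt: classifierURL,
        trainingData: trainingData,
        configuration: nil,
        completionHandler: { context in
            // context.model holds the updated weights; persist them so
            // the next launch uses the personalized model.
            try? context.model.write(to: classifierURL)
        })
    task.resume() // training runs locally; nothing leaves the device
} catch {
    print("On-device update failed:", error)
}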

The A14 is a Close Runner-up

You may have noticed that the iPhone is not included in the list of devices with the M1 chip.

Instead, the iPhone 12 has what’s called the “A14 Bionic” chip, an 11.8-billion-transistor powerhouse with a fast neural engine, a new image signal processor, and 70% faster machine learning accelerators.

