
Neural network implemented with light instead of electrons


Neural networks have a reputation for being computationally expensive. But only the training portion of things really stresses most computer hardware, since it involves regular evaluations of performance and constant trips back and forth to memory to tweak the connections among its artificial neurons. Using a trained neural network, in contrast, is a much simpler process, one that isn't nearly as computationally complex. In fact, the training and execution stages can be performed on completely different hardware.

And there seems to be a fair bit of flexibility in the hardware that can be used for either of these two processes. For example, it's possible to train neural networks using a specialized form of memory called a memristor, or to execute trained neural networks using custom silicon chips. Now, researchers at UCLA have done something a bit more radical. After training a neural network using traditional computing hardware, they 3D printed a set of panels that manipulated light in a way that was equivalent to processing information with the neural network. In the end, they got performance at the speed of light, though with somewhat reduced accuracy compared to more traditional hardware.

Lighten up

So how do you implement a neural network using light? To understand that, you have to understand the structure of a deep-learning neural network. In each layer, signals from the previous layer (or from the input source) are processed by "neurons," which then forward their results to neurons in the next layer. Which neurons they send signals to, and how strong those signals are, is determined by the training the network has undergone.
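As a rough illustration of what a single layer computes, here's a minimal sketch in Python. Every size and value here is made up for illustration and has nothing to do with the paper's actual model; the point is just that each neuron takes a weighted sum of its inputs and forwards the result onward.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_forward(inputs, weights, biases):
    """One dense layer: weighted sum plus bias, then a ReLU nonlinearity."""
    z = inputs @ weights + biases
    return np.maximum(z, 0.0)  # how strongly each neuron "fires"

# Hypothetical sizes: 784 input pixels -> 128 neurons -> 10 output classes.
x = rng.random(784)                                # a flattened input image
w1, b1 = rng.standard_normal((784, 128)) * 0.01, np.zeros(128)
w2, b2 = rng.standard_normal((128, 10)) * 0.01, np.zeros(10)

hidden = layer_forward(x, w1, b1)   # the first layer forwards its results...
scores = hidden @ w2 + b2           # ...to the neurons of the next layer
print("predicted class:", int(np.argmax(scores)))
```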

To do this with light, the UCLA team created a translucent, refracting surface. As light hits it, the precise structure of the surface determines how much light passes through and where it's directed to. If you place another, similar layer behind the first, it'll continue to redirect the light to specific locations. This is similar in principle to how the deep-learning network works, with each layer of the network redirecting signals to specific locations in the layer beyond.
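For readers who want the physical picture, here's a hedged numerical sketch of one printed layer acting on light, using the standard angular-spectrum propagation method. This is not the authors' simulation code, and the wavelength, pixel spacing, and distances are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def propagate(field, dx, wavelength, distance):
    """Angular-spectrum propagation of a complex optical field."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2        # squared axial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * distance), 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

n = 256
rng = np.random.default_rng(1)
# One printed layer: thickness variations impose a phase delay at each point.
layer = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n, n)))

field = np.ones((n, n), dtype=complex)               # an incoming plane wave
field = propagate(field * layer, dx=0.4e-3,
                  wavelength=0.75e-3, distance=30e-3)
intensity = np.abs(field) ** 2                       # what a detector would see
```

Stacking several such layers, each passing its output field through the next mask, plays the structural role of the stacked layers in a deep network.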

In practical terms, the researchers trained a neural network, identified the connections each layer made to the one below it, and then translated those into surface features that would direct light in a similar manner. By printing a series of these layers, the light would be gradually concentrated in a specific area. By placing detectors at specific locations behind the final layer, the researchers could tell where the light ended up. And, if everything was done properly, where the light ended up would indicate the neural network's decision.
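Here's a toy sketch of that readout step, with a made-up detector layout (the paper's actual detector geometry differs): sum the intensity falling on each detector region and take the brightest one as the network's answer.

```python
import numpy as np

def read_out(intensity, regions):
    """regions: list of (row_slice, col_slice) detector positions."""
    signals = [intensity[r, c].sum() for r, c in regions]
    return int(np.argmax(signals)), signals

n = 256
rng = np.random.default_rng(2)
intensity = rng.random((n, n))  # stand-in for the measured output plane

# Ten hypothetical detector patches arranged in a 2 x 5 grid.
regions = [(slice(64 * (i // 5) + 48, 64 * (i // 5) + 80),
            slice(48 * (i % 5) + 8, 48 * (i % 5) + 40))
           for i in range(10)]

decision, signals = read_out(intensity, regions)
print("network's decision:", decision)
```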

The authors tried this with two different types of image-recognition tasks. In the first, they trained the neural network to recognize hand-written numbers, and then they translated and printed the appropriate screens for a grid of 10 photodetectors to record the output. This was done with a five-layer neural network, and the researchers duly printed out five layers of light-manipulating material. To provide the neural network with input, they also printed a sheet that allowed them to project the objects being recognized onto the first layer of the neural network.
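To make the "train in software first" step concrete, here's a minimal sketch using scikit-learn's small built-in digits dataset. The authors trained a diffractive model with their own tooling, so this only illustrates the general idea; the dataset, layer sizes, and every other choice here are assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                # 8x8 handwritten digits
Xtr, Xte, ytr, yte = train_test_split(X / 16.0, y, random_state=0)

# Five layers total: input, three hidden layers, and a 10-way output.
clf = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=500,
                    random_state=0).fit(Xtr, ytr)
print("software accuracy:", clf.score(Xte, yte))
```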

Layering

When the UCLA researchers did this with hand-written digits, they ran into a problem: many digits (like 0 and 9) have open areas surrounded by the written portion of the digit. To 3D print a mask that projects light in the shape of the digit, the image has to be converted to a negative, with a filled-in area surrounded by open space. And that's pretty difficult to 3D print, since at least some material has to be used to keep the filled-in area attached to the rest of the screen. This, they suspect, lowered the accuracy of the identification task. Still, they managed over 90-percent accuracy.
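A tiny sketch of that negative-mask issue, with a purely illustrative toy image: the projected mask is the complement of the digit, so any enclosed background region becomes a floating island of material.

```python
import numpy as np

# A crude 28x28 "0": a ring of strokes around an enclosed hole.
digit = np.zeros((28, 28))
digit[6:22, 8:20] = 1.0
digit[10:18, 12:16] = 0.0            # the enclosed hole in the middle

mask = 1.0 - digit                   # negative: strokes open, rest blocked
# In `mask`, the hole is now a blocked island surrounded entirely by the
# open stroke area; a 3D print needs bridging material to hold it in
# place, and that material blocks or leaks light, hurting accuracy.
```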

A similar test with items of clothing went even better in one sense: while the total accuracy was only 86 percent, the gap between running the neural network in software and running it using light was smaller. The researchers suspect the differences in performance mostly come down to the fact that full performance requires extremely accurate alignment among all the layers of the neural network, and that's hard to arrange when the layers are small physical sheets.

This may also explain why adding more layers to the light-based neural network had a very modest impact on accuracy.
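To get a feel for why alignment matters so much, here's an illustrative simulation built on the same toy propagation sketch from earlier (all parameters remain assumptions): shifting just one of five layers by a single pixel measurably changes the output pattern.

```python
import numpy as np

def propagate(field, dx, wavelength, distance):
    """Angular-spectrum propagation, as in the earlier sketch."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * distance), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

n = 256
rng = np.random.default_rng(3)
masks = [np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n))) for _ in range(5)]

def run(shift):
    field = np.ones((n, n), dtype=complex)
    for k, m in enumerate(masks):
        if k == 2:                          # misalign the middle layer only
            m = np.roll(m, shift, axis=1)
        field = propagate(field * m, 0.4e-3, 0.75e-3, 30e-3)
    return np.abs(field) ** 2

aligned, shifted = run(0), run(1)           # a one-pixel lateral shift
change = np.abs(aligned - shifted).sum() / aligned.sum()
print(f"relative change in the output pattern: {change:.2%}")
```

And since every extra layer compounds its own placement error, more layers means tighter tolerances, not just more printing.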

Overall, it's extremely impressive that this works at all. Matching the wavelengths and materials to get the light to bend in the right ways, 3D printing within sufficiently high tolerances to recapitulate the trained neural network—a lot had to come together to get this to work. And while performance is down compared to computer-based implementations, the researchers suspect that at least some of the problems are things that can be fixed by developing a better system to align the sheets that make up the different layers of the network, although that's a challenge that should increase with the number of layers in the neural network.

And the authors seem to think it might be practically useful. They highlight the fact that calculations using light are extremely fast and most light sources are very low power.

But there are definitely some practical hurdles. The material is specific to a single wavelength of light, meaning that we can't just stick anything in front of the system and expect it to work. Right now, that's ensured by the projection system, but that relies on 3D printing a sheet to project a specific shape, which isn't exactly a time-efficient process. Replacing these with a sort of monochrome projector system should be possible, but it's not clear how much resolution matters to the system's accuracy. So there's some work to do before we'll know if this sort of system could have practical applications.

Science, 2018. DOI: 10.1126/science.aat8084

