
Artificial Intelligence: Looking Deeper at Neurons

source link: https://devm.io/machine-learning/artificial-general-intelligence-neurons

AGI: The most exciting project on the planet - Part Two


Charles Simon

04. Nov 2022


Looking deeper at neurons, the brain has billions of them hooked together in a big network. An individual neuron generates a spike that travels down the axon to the synapses, where it transmits a signal to the neurons it connects to. The signal travels down the axon at approximately one meter per second. This is not an electrical signal, but a kind of chain reaction of ions moving from place to place. The neuron itself is extremely slow.
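To get a feel for just how slow, here is a back-of-the-envelope sketch. The one-meter-per-second conduction velocity is the figure quoted above; the 10 cm path length is simply an assumed illustrative distance.

```python
# Rough estimate of spike propagation delay.
# The ~1 m/s conduction velocity comes from the text above;
# the 10 cm path length is an assumption for illustration only.
conduction_velocity_m_per_s = 1.0
path_length_m = 0.10

delay_s = path_length_m / conduction_velocity_m_per_s
print(f"Propagation delay over {path_length_m * 100:.0f} cm: {delay_s * 1000:.0f} ms")
# -> Propagation delay over 10 cm: 100 ms
```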

If you play with neurons in a simulator for a few years, you learn what neurons are good at doing and what they're not so good at doing. They're really good at signal differentiation (for example, in your eye you can detect boundaries between various colors or brightnesses with considerable accuracy). You can detect the relative arrival times of multiple signals and use this for directional hearing.
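As a rough illustration of the arrival-time idea, here is a sketch of how a time difference between two ears maps to a direction. The speed of sound and the ear separation are assumed round numbers, and the trigonometry is a simplification, not a claim about how neurons actually compute it.

```python
import math

# Sketch: estimate sound direction from the arrival-time difference
# between two ears. All constants are illustrative assumptions.
SPEED_OF_SOUND = 343.0   # m/s in air
EAR_SEPARATION = 0.20    # assume ~20 cm between ears

def direction_from_delay(delta_t_s: float) -> float:
    """Angle of the sound source in degrees from straight ahead,
    implied by an arrival-time difference of delta_t_s seconds."""
    # Extra path to the far ear is roughly d * sin(angle), so
    # delta_t = d * sin(angle) / c  =>  angle = asin(c * delta_t / d).
    ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * delta_t_s / EAR_SEPARATION))
    return math.degrees(math.asin(ratio))

print(direction_from_delay(0.0))      # 0.0   -> sound straight ahead
print(direction_from_delay(0.0003))   # ~31.0 -> 0.3 ms delay to one side
```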

You can have short-term memory in neuron firing or in the charge state of a neuron. This is really short-term – a memory you can establish with a single spike. You don't have to do any synapse adjustment. You can store something for a short period of time and the storage can happen very quickly. This storage mechanism might be used in detecting whether objects are in motion. Your neurons need to store the current visual field very briefly in order to detect if anything has moved.
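A minimal sketch of that idea, treating the visual field as a tiny grid of invented values: keep a copy of the previous field and flag whatever differs in the current one.

```python
# Motion detection only needs a very short-lived copy of the previous
# visual field: compare it to the current one and flag what changed.
# These toy 3x3 grids are invented illustration data.
previous_field = [
    [0, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
]
current_field = [
    [0, 0, 0],
    [0, 1, 1],
    [0, 0, 0],
]

moved = [
    (row, col)
    for row in range(len(current_field))
    for col in range(len(current_field[row]))
    if current_field[row][col] != previous_field[row][col]
]
print(moved)  # [(0, 2), (1, 2)] -- positions where something changed
```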

Synapses can also store memory in their configuration, the physical layout of the ions, and the synapse size. Once established, information in synapses requires no energy to keep indefinitely, so your brain can learn things now, set them in the configuration of the weights of your synapses, and store them indefinitely without burning energy. On the downside, it takes numerous neural spikes to adjust a synapse to a particular value so this method of storage is much slower.

Neurons are really slow – time frames of four milliseconds between spikes – which has a dramatic impact on the kinds of functions they can perform. Likewise, when we talk about synapses, the synapse weight changes according to a curve called Hebbian learning. But when you look at the underlying data, there's a lot of scatter. It's essentially impossible to set a specific synapse to a specific weight in a short period of time.
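A toy simulation of that point: each spike pairing nudges a weight toward a target value with some added scatter, and it takes many pairings to land near a specific value. The learning rate, scatter level, and target here are arbitrary assumptions, not measured biological values.

```python
import random

# Toy stand-in for the scatter around the Hebbian curve: each spike
# pairing pulls the weight toward a target, plus random noise.
# learning_rate, scatter, and target are arbitrary assumptions.
random.seed(0)

weight, target = 0.0, 0.7
learning_rate, scatter = 0.05, 0.02

for pairing in range(1, 1001):
    weight += learning_rate * (target - weight) + random.gauss(0, scatter)
    if abs(weight - target) <= 0.02:
        break

print(f"weight = {weight:.2f} after {pairing} spike pairings")
# At roughly 4 ms per spike, dozens of pairings already cost a
# noticeable fraction of a second for a single weight.
```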


The neuron’s slowness and synapse weight problems mean you cannot represent many distinct values in biological neurons. If you want a neuron’s firing rate to represent a value between 0 and 10, you need some amount of time, perhaps 40ms, to represent that value. The more different values you want to represent, the slower your system is going to run. And because the neuron is slow to start with, representing 100 different values is going to make it uselessly slower. Because of that, machine learning is not particularly viable in biological neurons.
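Here is the arithmetic behind that claim, using the roughly four-millisecond inter-spike interval quoted above; the read-out scheme of simply counting spikes is an assumption for illustration.

```python
# If a value is encoded as a firing rate, distinguishing N distinct
# values means counting up to N spikes at ~4 ms each (figure from
# the text). Counting spikes as the read-out is an assumed scheme.
SPIKE_INTERVAL_MS = 4

def time_to_represent(distinct_values: int) -> int:
    """Minimum milliseconds needed to tell N rate-coded values apart."""
    return distinct_values * SPIKE_INTERVAL_MS

for n in (10, 100, 1000):
    print(f"{n:>5} distinct values -> at least {time_to_represent(n)} ms")
# 10 -> 40 ms, 100 -> 400 ms, 1000 -> 4000 ms: far too slow for the
# weight precision that machine-learning-style methods depend on.
```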

This leads me to graphs – collections of nodes connected by edges. Neuron clusters can do a great job of representing a graph, but individual neurons cannot because the edges of the graph must be able to be traversed in both directions while individual synapses are unidirectional. That means that edges of the graph cannot be individual synapses and it takes additional neurons in the cluster to ensure that the correct edge direction is being traversed.
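As a sketch of the kind of structure being described – in Python rather than neurons – here is a graph that stores every edge in both directions so it can be traversed either way, just as the neuron cluster has to spend extra neurons to make a one-way synapse usable in reverse. The class and method names are invented for illustration.

```python
from collections import defaultdict

class Graph:
    """Tiny sketch of a graph whose edges can be walked in both directions."""

    def __init__(self):
        self.edges = defaultdict(list)   # node -> list of (edge_type, node)

    def add_edge(self, source, edge_type, target, reverse_type):
        # Store the forward edge and its reverse so either end is reachable.
        self.edges[source].append((edge_type, target))
        self.edges[target].append((reverse_type, source))

    def follow(self, node, edge_type):
        return [other for etype, other in self.edges[node] if etype == edge_type]
```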

We know there has to be some data structure in your brain that is a graph because you can know that yellow is a color and blue is a color, and from that you can be asked to name some colors and respond with yellow and blue. I can also tell you that foo is a color and bar is a color, and now when you name some colors you can say yellow, foo, and bar. You can train your mind with a single instance of information and you know the reverse of that information instantly. That tells me there's got to be a graph.
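Using the Graph sketch above, a single statement like "foo is a color" adds one forward edge and one reverse edge, so the reverse question works immediately. The edge-type names here are invented.

```python
# One-shot learning and instant reverse lookup with the Graph sketch above.
g = Graph()
for color in ("yellow", "blue", "foo", "bar"):
    g.add_edge(color, "is-a", "color", reverse_type="has-instance")

print(g.follow("yellow", "is-a"))         # ['color']
print(g.follow("color", "has-instance"))  # ['yellow', 'blue', 'foo', 'bar']
```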

Fig. 1: What neurons are really good at

So the question is not whether or not there's a graph in your brain, but whether it's a small graph and a lot of other stuff or a big graph and a little bit of other stuff.

Suppose your entire neocortex were a graph.

Fig. 2: What if the neocortex were a graph?

Go to the neuron simulator and work out what it would take to build graph nodes. At minimum, I could build a graph with eight neurons per node. You can't use one neuron per node because synapses are one-directional, but you must be able to traverse this information bidirectionally for it to be useful. So the minimum I could come up with was eight neurons per node, plus two additional neurons for every edge type you want. I only programmed two edge types, but you can add as many as you want.
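That neuron budget reduces to a simple formula; the ten-edge-type case below is just an extra illustration, not something built in the simulator.

```python
# Neuron cost per graph node in the minimal design described above:
# eight neurons for the node itself plus two per supported edge type.
def neurons_per_node(edge_types: int) -> int:
    return 8 + 2 * edge_types

print(neurons_per_node(2))    # 12 -- the two edge types actually programmed
print(neurons_per_node(10))   # 28 -- a hypothetical ten edge types
```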


A problem with this design is that the failure of any single neuron or synapse will cause the system to fail. We know the brain is highly redundant and highly capable of surviving failures. Neurons fail all the time and they don't seem to bother you very much. This suggests a figure closer to a hundred neurons per node, because that gives you redundancy plus the ability to track how recently a node was used so you know when to forget.

Doing a bit of division, if there are only 16 billion neurons in your neocortex, you have a maximum of 160 million nodes. That may sound like a lot, but Wikidata, a knowledge graph with about 100 million nodes and a billion edges, contains far more information than my brain. If we're talking about a system of 160 million nodes, I can put that on a desktop computer. You can add, delete, and modify nodes and edges. You can search it millions of ways. In the brain, it's likely there are not very many ways you can search, but a lot of redundancy.
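Here is that division spelled out, using the figures from the text: 16 billion neocortical neurons, about 100 neurons per redundant node, and Wikidata's roughly 100 million nodes.

```python
# Upper bound on the size of the brain's graph, per the figures above.
NEOCORTEX_NEURONS = 16_000_000_000
NEURONS_PER_NODE = 100            # redundancy estimate from the text

max_nodes = NEOCORTEX_NEURONS // NEURONS_PER_NODE
print(f"Maximum nodes: {max_nodes:,}")          # Maximum nodes: 160,000,000

WIKIDATA_NODES = 100_000_000      # figure quoted in the text
print(f"Roughly {max_nodes / WIKIDATA_NODES:.1f}x the node count of Wikidata")
```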

Charles Simon

Charles Simon, BSEE, MSCS, is a nationally recognized entrepreneur and software developer with many years of computer experience in industry, including pioneering work in AI. Mr. Simon's technical experience includes the creation of two unique artificial intelligence systems along with software for successful neurological test equipment. Combining AI development with biomedical nerve signal testing gives him singular insight. He is also the author of two books - Will Computers Revolt?: Preparing for the Future of Artificial Intelligence and Brain Simulator II: The Guide for Creating Artificial General Intelligence - and the developer of Brain Simulator II.

