Introduction to Spiking Neural Networks: a solution for sustainable AI

Khoa Le, Ph.D.
3 min read · Jun 30, 2019


Last Wednesday, I had the honor of presenting at a meetup organized for the first time by Klanik in Sophia Antipolis, France. The subject of the presentation was sustainable AI.

Deep learning is currently the dominant approach in AI: it outperforms classical machine learning algorithms and symbolic methods on cognition tasks such as image recognition, object detection, and voice recognition.

However, the computational demand of training a deep learning model is huge, and this compute comes from multiple GPUs, with a correspondingly negative effect on the environment.

And that is the figure for a single training run; a successful project may require years of experiments, and the growing AI community scales the problem up even further.

So what is the solution? The mainstream approach right now is to use hardware accelerators and to optimize the structure of the neural network. Even though this can reduce energy consumption by some factor, the efficiency remains low, because many redundant computations are made over and over again. For example, an object detection network recomputes the same result on the background of every frame, even when that background never changes.
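The redundancy argument can be made concrete with a toy operation count. This is an illustrative sketch, not from the article: `dense_ops` models a conventional pipeline that reprocesses every region of every frame, while the hypothetical `event_driven_ops` only reprocesses regions that changed since the previous frame.

```python
# Toy comparison of per-frame work: dense recomputation vs. an
# event-driven pipeline that skips unchanged regions. All names and
# numbers here are illustrative assumptions.

def dense_ops(frames, ops_per_region=1000):
    # Conventional pipeline: every region of every frame is recomputed.
    return sum(len(frame) * ops_per_region for frame in frames)

def event_driven_ops(frames, ops_per_region=1000):
    # Hypothetical event-driven pipeline: only changed regions are recomputed.
    total, prev = 0, None
    for frame in frames:
        if prev is None:
            changed = len(frame)  # first frame: everything is new
        else:
            changed = sum(1 for a, b in zip(frame, prev) if a != b)
        total += changed * ops_per_region
        prev = frame
    return total

# A static background with one moving object: 1 of 10 regions changes per frame.
frames = [[0] * 9 + [i] for i in range(10)]
print(dense_ops(frames), event_driven_ops(frames))  # 100000 19000
```

With a mostly static scene, the event-driven count is dominated by the first frame; every later frame costs only as much as what actually moved, which is the intuition behind sparse, spike-based processing.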

Another solution, which attracts more and more attention from people all over the world, is to apply neuromorphic computing to this urgent energy problem. The field is grounded in biological evidence about the energy efficiency of the human brain, and it is interdisciplinary, requiring experts from different backgrounds: neuroscience, mathematics, computer science, electronics, and hardware design.

Much research has demonstrated the energy efficiency of the human brain. It consumes roughly 20 W while performing on the order of 1e20 neuronal and synaptic operations per second, i.e. about 2e-19 J per operation. A GPU, by comparison, typically consumes about 100 W for 1e11 operations per second, i.e. about 1e-9 J per operation, which makes the brain several billion times (roughly 5e9) more energy efficient. Moreover, information in nature is sparse, so as humans we perform far fewer redundant operations than current deep learning methods.
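The back-of-envelope arithmetic above, using the power and throughput figures from the text, works out as follows:

```python
# Energy-per-operation comparison, using the estimates quoted in the text.
brain_power_w = 20.0       # human brain power budget (W)
brain_ops_per_s = 1e20     # neuronal/synaptic operations per second (estimate)

gpu_power_w = 100.0        # typical GPU power draw (W)
gpu_ops_per_s = 1e11       # operations per second

brain_j_per_op = brain_power_w / brain_ops_per_s   # 2e-19 J per operation
gpu_j_per_op = gpu_power_w / gpu_ops_per_s         # 1e-9 J per operation

advantage = gpu_j_per_op / brain_j_per_op          # ~5e9
print(f"brain: {brain_j_per_op:.0e} J/op, GPU: {gpu_j_per_op:.0e} J/op, "
      f"ratio: {advantage:.0e}")
```

These are order-of-magnitude estimates; the exact operation counts for the brain are debated, but the gap of many orders of magnitude is robust.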

Next, with electrophysiological recording technologies such as EEG, we can observe the signals transferred between neurons in the brain, and these signals take the form of spike trains, which are sparse and asynchronous.

This motivates a new generation of deep learning algorithms called Spiking Neural Networks (SNNs). These networks use spike trains as their computational unit. You may wonder whether we are going backward, returning from neurons that emit numerical values to neurons that give binary outputs. However, each SNN neuron maintains an internal state corresponding to the membrane potential of a biological neuron. This membrane potential preserves the timing information of the spike trains, so an SNN can process the spatio-temporal information they carry, whereas in current deep neural networks (DNNs) time is only handled by recurrent architectures. An SNN is therefore potentially able to extract more 'meaningful' information from spikes than a DNN.

How an SNN functions will be explained in more detail in the second part of this introduction. Thank you for your support.


Khoa Le, Ph.D.

I do Data Science on Medical Imaging and Finance, and love them both.