Guest Post by Steven K. Esser, Rathinakumar Appuswamy, Paul A. Merolla, and John V. Arthur.
At the 2015 Neural Information Processing Systems (NIPS) conference, in a paper entitled "Backpropagation for Energy-Efficient Neuromorphic Computing," we will be presenting our latest research on adapting machine learning techniques to train TrueNorth networks. In essence, this is our first step toward bringing together deep learning (for offline learning) and brain-inspired computing (for online delivery).
This work is driven by an interest in using neural networks in embedded systems to solve real-world problems. Such systems must satisfy both performance requirements, namely accuracy and generalizability, and platform requirements, such as a small footprint, low power consumption, and real-time operation. We have seen many recent examples demonstrating that machine learning can meet the performance needs, and others showing that neuromorphic approaches such as TrueNorth are well suited to the platform needs.
An interesting challenge arises in trying to bring machine learning and neuromorphic hardware together. To achieve high efficiency, TrueNorth uses spiking neurons, discrete synapses, and constrained connectivity. However, backpropagation, the algorithm at the core of much of machine learning, uses continuous-output neurons and high-precision synapses, and typically operates with no limit on the number of inputs per neuron. How, then, can we build systems that take advantage of both the algorithmic insights of machine learning and the operational efficiency of neuromorphic hardware?
In our work, we demonstrate a learning rule and network topology that reconcile this apparent incompatibility by training in a continuous and differentiable probabilistic space that has a direct correspondence to spikes and discrete synaptic states in the hardware domain. Using this approach, we achieved near-state-of-the-art accuracy on the MNIST handwritten digit dataset (99.42%), the best accuracy to date using spiking neurons and/or low-precision discrete synapses. We also demonstrated three orders of magnitude less energy per classification than the next-best low-power approach.
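To make the idea concrete, here is a minimal NumPy sketch of the general principle, not the exact algorithm from our paper: each discrete synapse is parameterized by a continuous probability of taking one state or the other, backpropagation operates on the expected value of that distribution, and only at deployment are discrete weights drawn from the trained probabilities. The toy dataset, learning rate, and single-neuron model here are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): the label is the sign of the coordinate sum.
X = rng.normal(size=(200, 8))
y = (X.sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each synapse is described by a probability p = sigmoid(logit) that its
# deployed discrete weight is +1 (otherwise -1). Training operates only on
# the continuous logits; deployment sees only discrete weights.
logits = np.zeros(8)
lr = 2.0

for _ in range(2000):
    p = sigmoid(logits)
    w_expected = 2.0 * p - 1.0   # E[w] for a {-1, +1} synapse
    out = sigmoid(X @ w_expected)
    err = out - y                # d(cross-entropy)/d(pre-activation)
    grad_w = X.T @ err / len(y)
    # Chain rule through w = 2p - 1 and p = sigmoid(logit).
    logits -= lr * grad_w * 2.0 * p * (1.0 - p)

# Deployment: draw one discrete {-1, +1} weight per synapse from the
# trained probabilities, standing in for a hardware-ready network.
w_discrete = np.where(rng.random(8) < sigmoid(logits), 1.0, -1.0)
acc = np.mean((sigmoid(X @ w_discrete) > 0.5) == (y > 0.5))
```

Because the loss is differentiable in the probabilities, ordinary gradient descent applies during training, while the sampled network uses only the two discrete weight values a constrained substrate can store.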
Figure: Accuracy and energy of networks trained using our approach running on the TrueNorth chip. Ensembles of multiple networks were tested, with ensemble size indicated next to each data point; accuracy can be traded off against energy.
The software behind the published algorithm is already in the hands of nearly 100 developers, most of whom attended the August 2015 Boot Camp, and we expect a number of new results in 2016.
Looking to the future, we are working to expand the repertoire of machine learning approaches for training TrueNorth networks. We have exciting work brewing in our lab using TrueNorth with convolutional networks, and have achieved near-state-of-the-art accuracy on a number of additional datasets.
The paper can be found at https://papers.nips.cc/paper/5862-backpropagation-for-energy-efficient-neuromorphic-computing